Test Report: KVM_Linux_crio 18669

cfcc925aedaed70a8d6bc80f04f086c17ea387e6:2024-04-19:34110

Test fail (11/207)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-310054 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-310054 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.943158093s)
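This failure is a timeout: the minikube start invocation above was killed by the test's 40-minute deadline (signal: killed after 39m59.9s) while waiting for all requested addons and cluster components to become ready. A minimal sketch of re-running only this integration test locally follows, assuming a minikube repo checkout with out/minikube-linux-amd64 already built and a working kvm2/libvirt host; only standard go test flags are used here, and any harness-specific options for selecting the kvm2 driver and crio runtime are deliberately omitted since they depend on the local setup:

	# hedged sketch: re-run only the failing test from the minikube repo root
	# (assumes the binary under test was built first, e.g. with make)
	go test ./test/integration/... -run 'TestAddons/Setup' -timeout 60m -v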

-- stdout --
	* [addons-310054] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-310054" primary control-plane node in "addons-310054" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-310054 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	* Verifying ingress addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-310054 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, yakd, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0419 19:18:13.906398  375092 out.go:291] Setting OutFile to fd 1 ...
	I0419 19:18:13.906693  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:18:13.906705  375092 out.go:304] Setting ErrFile to fd 2...
	I0419 19:18:13.906709  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:18:13.906916  375092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 19:18:13.907548  375092 out.go:298] Setting JSON to false
	I0419 19:18:13.908446  375092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3640,"bootTime":1713550654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 19:18:13.908523  375092 start.go:139] virtualization: kvm guest
	I0419 19:18:13.910812  375092 out.go:177] * [addons-310054] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 19:18:13.912282  375092 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 19:18:13.913577  375092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 19:18:13.912384  375092 notify.go:220] Checking for updates...
	I0419 19:18:13.914874  375092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 19:18:13.916154  375092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 19:18:13.917482  375092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 19:18:13.918691  375092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 19:18:13.920331  375092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 19:18:13.951780  375092 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 19:18:13.953062  375092 start.go:297] selected driver: kvm2
	I0419 19:18:13.953084  375092 start.go:901] validating driver "kvm2" against <nil>
	I0419 19:18:13.953096  375092 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 19:18:13.953817  375092 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:18:13.953891  375092 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 19:18:13.968710  375092 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 19:18:13.968784  375092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 19:18:13.969013  375092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 19:18:13.969083  375092 cni.go:84] Creating CNI manager for ""
	I0419 19:18:13.969095  375092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 19:18:13.969102  375092 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 19:18:13.969165  375092 start.go:340] cluster config:
	{Name:addons-310054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-310054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:18:13.969258  375092 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:18:13.971111  375092 out.go:177] * Starting "addons-310054" primary control-plane node in "addons-310054" cluster
	I0419 19:18:13.972504  375092 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 19:18:13.972545  375092 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 19:18:13.972556  375092 cache.go:56] Caching tarball of preloaded images
	I0419 19:18:13.972687  375092 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 19:18:13.972699  375092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 19:18:13.973003  375092 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/config.json ...
	I0419 19:18:13.973024  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/config.json: {Name:mkcd04c7e1390fb90d38ccc9269e37e6aa7af895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:13.973159  375092 start.go:360] acquireMachinesLock for addons-310054: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 19:18:13.973205  375092 start.go:364] duration metric: took 33.105µs to acquireMachinesLock for "addons-310054"
	I0419 19:18:13.973223  375092 start.go:93] Provisioning new machine with config: &{Name:addons-310054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:addons-310054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 19:18:13.973276  375092 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 19:18:13.975029  375092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 19:18:13.975188  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:18:13.975230  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:18:13.989757  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I0419 19:18:13.990308  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:18:13.990881  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:18:13.990904  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:18:13.991281  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:18:13.991476  375092 main.go:141] libmachine: (addons-310054) Calling .GetMachineName
	I0419 19:18:13.991640  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:13.991769  375092 start.go:159] libmachine.API.Create for "addons-310054" (driver="kvm2")
	I0419 19:18:13.991798  375092 client.go:168] LocalClient.Create starting
	I0419 19:18:13.991838  375092 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 19:18:14.126783  375092 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 19:18:14.277652  375092 main.go:141] libmachine: Running pre-create checks...
	I0419 19:18:14.277680  375092 main.go:141] libmachine: (addons-310054) Calling .PreCreateCheck
	I0419 19:18:14.278237  375092 main.go:141] libmachine: (addons-310054) Calling .GetConfigRaw
	I0419 19:18:14.278728  375092 main.go:141] libmachine: Creating machine...
	I0419 19:18:14.278748  375092 main.go:141] libmachine: (addons-310054) Calling .Create
	I0419 19:18:14.278894  375092 main.go:141] libmachine: (addons-310054) Creating KVM machine...
	I0419 19:18:14.280288  375092 main.go:141] libmachine: (addons-310054) DBG | found existing default KVM network
	I0419 19:18:14.281169  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:14.281009  375114 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0419 19:18:14.281263  375092 main.go:141] libmachine: (addons-310054) DBG | created network xml: 
	I0419 19:18:14.281285  375092 main.go:141] libmachine: (addons-310054) DBG | <network>
	I0419 19:18:14.281298  375092 main.go:141] libmachine: (addons-310054) DBG |   <name>mk-addons-310054</name>
	I0419 19:18:14.281310  375092 main.go:141] libmachine: (addons-310054) DBG |   <dns enable='no'/>
	I0419 19:18:14.281319  375092 main.go:141] libmachine: (addons-310054) DBG |   
	I0419 19:18:14.281339  375092 main.go:141] libmachine: (addons-310054) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0419 19:18:14.281355  375092 main.go:141] libmachine: (addons-310054) DBG |     <dhcp>
	I0419 19:18:14.281372  375092 main.go:141] libmachine: (addons-310054) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0419 19:18:14.281403  375092 main.go:141] libmachine: (addons-310054) DBG |     </dhcp>
	I0419 19:18:14.281446  375092 main.go:141] libmachine: (addons-310054) DBG |   </ip>
	I0419 19:18:14.281498  375092 main.go:141] libmachine: (addons-310054) DBG |   
	I0419 19:18:14.281533  375092 main.go:141] libmachine: (addons-310054) DBG | </network>
	I0419 19:18:14.281545  375092 main.go:141] libmachine: (addons-310054) DBG | 
	I0419 19:18:14.287083  375092 main.go:141] libmachine: (addons-310054) DBG | trying to create private KVM network mk-addons-310054 192.168.39.0/24...
	I0419 19:18:14.356362  375092 main.go:141] libmachine: (addons-310054) DBG | private KVM network mk-addons-310054 192.168.39.0/24 created
	I0419 19:18:14.356404  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:14.356315  375114 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 19:18:14.356418  375092 main.go:141] libmachine: (addons-310054) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054 ...
	I0419 19:18:14.356437  375092 main.go:141] libmachine: (addons-310054) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 19:18:14.356454  375092 main.go:141] libmachine: (addons-310054) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 19:18:14.613155  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:14.613014  375114 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa...
	I0419 19:18:14.854784  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:14.854632  375114 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/addons-310054.rawdisk...
	I0419 19:18:14.854818  375092 main.go:141] libmachine: (addons-310054) DBG | Writing magic tar header
	I0419 19:18:14.854828  375092 main.go:141] libmachine: (addons-310054) DBG | Writing SSH key tar header
	I0419 19:18:14.854837  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:14.854786  375114 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054 ...
	I0419 19:18:14.854975  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054
	I0419 19:18:14.855048  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054 (perms=drwx------)
	I0419 19:18:14.855081  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 19:18:14.855125  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 19:18:14.855158  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 19:18:14.855181  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 19:18:14.855196  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 19:18:14.855210  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 19:18:14.855224  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 19:18:14.855239  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home/jenkins
	I0419 19:18:14.855262  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 19:18:14.855275  375092 main.go:141] libmachine: (addons-310054) DBG | Checking permissions on dir: /home
	I0419 19:18:14.855291  375092 main.go:141] libmachine: (addons-310054) DBG | Skipping /home - not owner
	I0419 19:18:14.855334  375092 main.go:141] libmachine: (addons-310054) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 19:18:14.855349  375092 main.go:141] libmachine: (addons-310054) Creating domain...
	I0419 19:18:14.856215  375092 main.go:141] libmachine: (addons-310054) define libvirt domain using xml: 
	I0419 19:18:14.856241  375092 main.go:141] libmachine: (addons-310054) <domain type='kvm'>
	I0419 19:18:14.856252  375092 main.go:141] libmachine: (addons-310054)   <name>addons-310054</name>
	I0419 19:18:14.856266  375092 main.go:141] libmachine: (addons-310054)   <memory unit='MiB'>4000</memory>
	I0419 19:18:14.856332  375092 main.go:141] libmachine: (addons-310054)   <vcpu>2</vcpu>
	I0419 19:18:14.856366  375092 main.go:141] libmachine: (addons-310054)   <features>
	I0419 19:18:14.856381  375092 main.go:141] libmachine: (addons-310054)     <acpi/>
	I0419 19:18:14.856391  375092 main.go:141] libmachine: (addons-310054)     <apic/>
	I0419 19:18:14.856400  375092 main.go:141] libmachine: (addons-310054)     <pae/>
	I0419 19:18:14.856409  375092 main.go:141] libmachine: (addons-310054)     
	I0419 19:18:14.856418  375092 main.go:141] libmachine: (addons-310054)   </features>
	I0419 19:18:14.856428  375092 main.go:141] libmachine: (addons-310054)   <cpu mode='host-passthrough'>
	I0419 19:18:14.856436  375092 main.go:141] libmachine: (addons-310054)   
	I0419 19:18:14.856447  375092 main.go:141] libmachine: (addons-310054)   </cpu>
	I0419 19:18:14.856460  375092 main.go:141] libmachine: (addons-310054)   <os>
	I0419 19:18:14.856470  375092 main.go:141] libmachine: (addons-310054)     <type>hvm</type>
	I0419 19:18:14.856501  375092 main.go:141] libmachine: (addons-310054)     <boot dev='cdrom'/>
	I0419 19:18:14.856540  375092 main.go:141] libmachine: (addons-310054)     <boot dev='hd'/>
	I0419 19:18:14.856555  375092 main.go:141] libmachine: (addons-310054)     <bootmenu enable='no'/>
	I0419 19:18:14.856572  375092 main.go:141] libmachine: (addons-310054)   </os>
	I0419 19:18:14.856581  375092 main.go:141] libmachine: (addons-310054)   <devices>
	I0419 19:18:14.856587  375092 main.go:141] libmachine: (addons-310054)     <disk type='file' device='cdrom'>
	I0419 19:18:14.856599  375092 main.go:141] libmachine: (addons-310054)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/boot2docker.iso'/>
	I0419 19:18:14.856608  375092 main.go:141] libmachine: (addons-310054)       <target dev='hdc' bus='scsi'/>
	I0419 19:18:14.856613  375092 main.go:141] libmachine: (addons-310054)       <readonly/>
	I0419 19:18:14.856619  375092 main.go:141] libmachine: (addons-310054)     </disk>
	I0419 19:18:14.856627  375092 main.go:141] libmachine: (addons-310054)     <disk type='file' device='disk'>
	I0419 19:18:14.856649  375092 main.go:141] libmachine: (addons-310054)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 19:18:14.856658  375092 main.go:141] libmachine: (addons-310054)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/addons-310054.rawdisk'/>
	I0419 19:18:14.856666  375092 main.go:141] libmachine: (addons-310054)       <target dev='hda' bus='virtio'/>
	I0419 19:18:14.856671  375092 main.go:141] libmachine: (addons-310054)     </disk>
	I0419 19:18:14.856678  375092 main.go:141] libmachine: (addons-310054)     <interface type='network'>
	I0419 19:18:14.856685  375092 main.go:141] libmachine: (addons-310054)       <source network='mk-addons-310054'/>
	I0419 19:18:14.856692  375092 main.go:141] libmachine: (addons-310054)       <model type='virtio'/>
	I0419 19:18:14.856698  375092 main.go:141] libmachine: (addons-310054)     </interface>
	I0419 19:18:14.856705  375092 main.go:141] libmachine: (addons-310054)     <interface type='network'>
	I0419 19:18:14.856710  375092 main.go:141] libmachine: (addons-310054)       <source network='default'/>
	I0419 19:18:14.856717  375092 main.go:141] libmachine: (addons-310054)       <model type='virtio'/>
	I0419 19:18:14.856723  375092 main.go:141] libmachine: (addons-310054)     </interface>
	I0419 19:18:14.856730  375092 main.go:141] libmachine: (addons-310054)     <serial type='pty'>
	I0419 19:18:14.856735  375092 main.go:141] libmachine: (addons-310054)       <target port='0'/>
	I0419 19:18:14.856745  375092 main.go:141] libmachine: (addons-310054)     </serial>
	I0419 19:18:14.856772  375092 main.go:141] libmachine: (addons-310054)     <console type='pty'>
	I0419 19:18:14.856794  375092 main.go:141] libmachine: (addons-310054)       <target type='serial' port='0'/>
	I0419 19:18:14.856813  375092 main.go:141] libmachine: (addons-310054)     </console>
	I0419 19:18:14.856830  375092 main.go:141] libmachine: (addons-310054)     <rng model='virtio'>
	I0419 19:18:14.856844  375092 main.go:141] libmachine: (addons-310054)       <backend model='random'>/dev/random</backend>
	I0419 19:18:14.856854  375092 main.go:141] libmachine: (addons-310054)     </rng>
	I0419 19:18:14.856864  375092 main.go:141] libmachine: (addons-310054)     
	I0419 19:18:14.856874  375092 main.go:141] libmachine: (addons-310054)     
	I0419 19:18:14.856885  375092 main.go:141] libmachine: (addons-310054)   </devices>
	I0419 19:18:14.856895  375092 main.go:141] libmachine: (addons-310054) </domain>
	I0419 19:18:14.856912  375092 main.go:141] libmachine: (addons-310054) 
	I0419 19:18:14.862634  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:79:9e:2f in network default
	I0419 19:18:14.863124  375092 main.go:141] libmachine: (addons-310054) Ensuring networks are active...
	I0419 19:18:14.863142  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:14.863773  375092 main.go:141] libmachine: (addons-310054) Ensuring network default is active
	I0419 19:18:14.864148  375092 main.go:141] libmachine: (addons-310054) Ensuring network mk-addons-310054 is active
	I0419 19:18:14.864738  375092 main.go:141] libmachine: (addons-310054) Getting domain xml...
	I0419 19:18:14.865526  375092 main.go:141] libmachine: (addons-310054) Creating domain...
	I0419 19:18:16.093502  375092 main.go:141] libmachine: (addons-310054) Waiting to get IP...
	I0419 19:18:16.094332  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:16.094672  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:16.094732  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:16.094668  375114 retry.go:31] will retry after 210.201174ms: waiting for machine to come up
	I0419 19:18:16.306121  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:16.306631  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:16.306660  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:16.306574  375114 retry.go:31] will retry after 271.901246ms: waiting for machine to come up
	I0419 19:18:16.580133  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:16.580582  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:16.580613  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:16.580525  375114 retry.go:31] will retry after 336.01442ms: waiting for machine to come up
	I0419 19:18:16.918189  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:16.918670  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:16.918696  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:16.918614  375114 retry.go:31] will retry after 582.322426ms: waiting for machine to come up
	I0419 19:18:17.502401  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:17.502844  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:17.502877  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:17.502781  375114 retry.go:31] will retry after 725.080872ms: waiting for machine to come up
	I0419 19:18:18.229681  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:18.230090  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:18.230118  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:18.230042  375114 retry.go:31] will retry after 658.306671ms: waiting for machine to come up
	I0419 19:18:18.889778  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:18.890258  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:18.890288  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:18.890206  375114 retry.go:31] will retry after 785.035177ms: waiting for machine to come up
	I0419 19:18:19.676646  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:19.677077  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:19.677104  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:19.677031  375114 retry.go:31] will retry after 928.188465ms: waiting for machine to come up
	I0419 19:18:20.607227  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:20.607657  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:20.607691  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:20.607604  375114 retry.go:31] will retry after 1.818059832s: waiting for machine to come up
	I0419 19:18:22.428685  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:22.429143  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:22.429177  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:22.429078  375114 retry.go:31] will retry after 2.122206885s: waiting for machine to come up
	I0419 19:18:24.553432  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:24.553910  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:24.553942  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:24.553853  375114 retry.go:31] will retry after 2.813497401s: waiting for machine to come up
	I0419 19:18:27.369726  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:27.370122  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:27.370157  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:27.370048  375114 retry.go:31] will retry after 3.020392034s: waiting for machine to come up
	I0419 19:18:30.392844  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:30.393373  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find current IP address of domain addons-310054 in network mk-addons-310054
	I0419 19:18:30.393393  375092 main.go:141] libmachine: (addons-310054) DBG | I0419 19:18:30.393326  375114 retry.go:31] will retry after 4.493344181s: waiting for machine to come up
	I0419 19:18:34.888026  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:34.888560  375092 main.go:141] libmachine: (addons-310054) Found IP for machine: 192.168.39.199
	I0419 19:18:34.888595  375092 main.go:141] libmachine: (addons-310054) Reserving static IP address...
	I0419 19:18:34.888615  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has current primary IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:34.888958  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find host DHCP lease matching {name: "addons-310054", mac: "52:54:00:8d:af:a1", ip: "192.168.39.199"} in network mk-addons-310054
	I0419 19:18:34.960948  375092 main.go:141] libmachine: (addons-310054) DBG | Getting to WaitForSSH function...
	I0419 19:18:34.960986  375092 main.go:141] libmachine: (addons-310054) Reserved static IP address: 192.168.39.199
	I0419 19:18:34.961006  375092 main.go:141] libmachine: (addons-310054) Waiting for SSH to be available...
	I0419 19:18:34.963658  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:34.964015  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054
	I0419 19:18:34.964045  375092 main.go:141] libmachine: (addons-310054) DBG | unable to find defined IP address of network mk-addons-310054 interface with MAC address 52:54:00:8d:af:a1
	I0419 19:18:34.964242  375092 main.go:141] libmachine: (addons-310054) DBG | Using SSH client type: external
	I0419 19:18:34.964268  375092 main.go:141] libmachine: (addons-310054) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa (-rw-------)
	I0419 19:18:34.964487  375092 main.go:141] libmachine: (addons-310054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 19:18:34.964522  375092 main.go:141] libmachine: (addons-310054) DBG | About to run SSH command:
	I0419 19:18:34.964539  375092 main.go:141] libmachine: (addons-310054) DBG | exit 0
	I0419 19:18:34.968035  375092 main.go:141] libmachine: (addons-310054) DBG | SSH cmd err, output: exit status 255: 
	I0419 19:18:34.968084  375092 main.go:141] libmachine: (addons-310054) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0419 19:18:34.968098  375092 main.go:141] libmachine: (addons-310054) DBG | command : exit 0
	I0419 19:18:34.968117  375092 main.go:141] libmachine: (addons-310054) DBG | err     : exit status 255
	I0419 19:18:34.968133  375092 main.go:141] libmachine: (addons-310054) DBG | output  : 
	I0419 19:18:37.970306  375092 main.go:141] libmachine: (addons-310054) DBG | Getting to WaitForSSH function...
	I0419 19:18:37.973015  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:37.973436  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:37.973468  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:37.973607  375092 main.go:141] libmachine: (addons-310054) DBG | Using SSH client type: external
	I0419 19:18:37.973634  375092 main.go:141] libmachine: (addons-310054) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa (-rw-------)
	I0419 19:18:37.973674  375092 main.go:141] libmachine: (addons-310054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 19:18:37.973725  375092 main.go:141] libmachine: (addons-310054) DBG | About to run SSH command:
	I0419 19:18:37.973755  375092 main.go:141] libmachine: (addons-310054) DBG | exit 0
	I0419 19:18:38.100813  375092 main.go:141] libmachine: (addons-310054) DBG | SSH cmd err, output: <nil>: 
	I0419 19:18:38.101072  375092 main.go:141] libmachine: (addons-310054) KVM machine creation complete!
	I0419 19:18:38.101341  375092 main.go:141] libmachine: (addons-310054) Calling .GetConfigRaw
	I0419 19:18:38.101936  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:38.102155  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:38.102329  375092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 19:18:38.102345  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:18:38.103552  375092 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 19:18:38.103569  375092 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 19:18:38.103574  375092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 19:18:38.103580  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.105659  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.105966  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.105994  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.106133  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:38.106317  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.106490  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.106594  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:38.106768  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:38.106990  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:38.107006  375092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 19:18:38.220148  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 19:18:38.220182  375092 main.go:141] libmachine: Detecting the provisioner...
	I0419 19:18:38.220196  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.223024  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.223387  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.223422  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.223535  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:38.223760  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.223920  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.224077  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:38.224264  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:38.224426  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:38.224436  375092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 19:18:38.337958  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 19:18:38.338104  375092 main.go:141] libmachine: found compatible host: buildroot
	I0419 19:18:38.338153  375092 main.go:141] libmachine: Provisioning with buildroot...
	I0419 19:18:38.338172  375092 main.go:141] libmachine: (addons-310054) Calling .GetMachineName
	I0419 19:18:38.338479  375092 buildroot.go:166] provisioning hostname "addons-310054"
	I0419 19:18:38.338506  375092 main.go:141] libmachine: (addons-310054) Calling .GetMachineName
	I0419 19:18:38.338716  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.341387  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.341792  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.341822  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.341941  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:38.342124  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.342276  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.342425  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:38.342611  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:38.342799  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:38.342816  375092 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-310054 && echo "addons-310054" | sudo tee /etc/hostname
	I0419 19:18:38.471331  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-310054
	
	I0419 19:18:38.471363  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.474328  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.474819  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.474849  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.475097  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:38.475331  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.475502  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.475616  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:38.475766  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:38.475957  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:38.475980  375092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-310054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-310054/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-310054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 19:18:38.598790  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 19:18:38.598820  375092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 19:18:38.598861  375092 buildroot.go:174] setting up certificates
	I0419 19:18:38.598875  375092 provision.go:84] configureAuth start
	I0419 19:18:38.598893  375092 main.go:141] libmachine: (addons-310054) Calling .GetMachineName
	I0419 19:18:38.599219  375092 main.go:141] libmachine: (addons-310054) Calling .GetIP
	I0419 19:18:38.602075  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.602383  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.602403  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.602585  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.605302  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.605700  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.605729  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.605873  375092 provision.go:143] copyHostCerts
	I0419 19:18:38.605979  375092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 19:18:38.606182  375092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 19:18:38.606270  375092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 19:18:38.606366  375092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.addons-310054 san=[127.0.0.1 192.168.39.199 addons-310054 localhost minikube]
	I0419 19:18:38.896501  375092 provision.go:177] copyRemoteCerts
	I0419 19:18:38.896569  375092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 19:18:38.896595  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:38.899797  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.900221  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:38.900254  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:38.900399  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:38.900617  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:38.900823  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:38.900960  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:18:38.987384  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 19:18:39.013205  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 19:18:39.037713  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 19:18:39.061584  375092 provision.go:87] duration metric: took 462.691877ms to configureAuth
	I0419 19:18:39.061611  375092 buildroot.go:189] setting minikube options for container-runtime
	I0419 19:18:39.061830  375092 config.go:182] Loaded profile config "addons-310054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 19:18:39.061956  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:39.064577  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.064957  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.064983  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.065155  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:39.065324  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.065501  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.065606  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:39.065736  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:39.065900  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:39.065917  375092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 19:18:39.352804  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 19:18:39.352847  375092 main.go:141] libmachine: Checking connection to Docker...
	I0419 19:18:39.352860  375092 main.go:141] libmachine: (addons-310054) Calling .GetURL
	I0419 19:18:39.354153  375092 main.go:141] libmachine: (addons-310054) DBG | Using libvirt version 6000000
	I0419 19:18:39.356446  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.356810  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.356831  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.357035  375092 main.go:141] libmachine: Docker is up and running!
	I0419 19:18:39.357050  375092 main.go:141] libmachine: Reticulating splines...
	I0419 19:18:39.357058  375092 client.go:171] duration metric: took 25.365248149s to LocalClient.Create
	I0419 19:18:39.357085  375092 start.go:167] duration metric: took 25.365317002s to libmachine.API.Create "addons-310054"
	I0419 19:18:39.357094  375092 start.go:293] postStartSetup for "addons-310054" (driver="kvm2")
	I0419 19:18:39.357104  375092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 19:18:39.357148  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:39.357447  375092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 19:18:39.357526  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:39.359644  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.360071  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.360101  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.360254  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:39.360438  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.360616  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:39.360786  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:18:39.447305  375092 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 19:18:39.451966  375092 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 19:18:39.452004  375092 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 19:18:39.452099  375092 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 19:18:39.452133  375092 start.go:296] duration metric: took 95.027653ms for postStartSetup
	I0419 19:18:39.452170  375092 main.go:141] libmachine: (addons-310054) Calling .GetConfigRaw
	I0419 19:18:39.452786  375092 main.go:141] libmachine: (addons-310054) Calling .GetIP
	I0419 19:18:39.456335  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.456779  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.456810  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.457086  375092 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/config.json ...
	I0419 19:18:39.457316  375092 start.go:128] duration metric: took 25.484027658s to createHost
	I0419 19:18:39.457346  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:39.459744  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.460080  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.460110  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.460309  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:39.460510  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.460699  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.460869  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:39.461128  375092 main.go:141] libmachine: Using SSH client type: native
	I0419 19:18:39.461330  375092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0419 19:18:39.461347  375092 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 19:18:39.573347  375092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713554319.546632825
	
	I0419 19:18:39.573376  375092 fix.go:216] guest clock: 1713554319.546632825
	I0419 19:18:39.573386  375092 fix.go:229] Guest: 2024-04-19 19:18:39.546632825 +0000 UTC Remote: 2024-04-19 19:18:39.457331589 +0000 UTC m=+25.597593145 (delta=89.301236ms)
	I0419 19:18:39.573437  375092 fix.go:200] guest clock delta is within tolerance: 89.301236ms
	I0419 19:18:39.573446  375092 start.go:83] releasing machines lock for "addons-310054", held for 25.600229943s
	I0419 19:18:39.573473  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:39.573741  375092 main.go:141] libmachine: (addons-310054) Calling .GetIP
	I0419 19:18:39.576706  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.577052  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.577086  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.577206  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:39.577730  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:39.577905  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:18:39.578020  375092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 19:18:39.578069  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:39.578187  375092 ssh_runner.go:195] Run: cat /version.json
	I0419 19:18:39.578215  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:18:39.580684  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.580844  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.581041  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.581063  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.581126  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:39.581144  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:39.581155  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:39.581326  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.581381  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:18:39.581506  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:39.581535  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:18:39.581700  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:18:39.581689  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:18:39.581834  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:18:39.661468  375092 ssh_runner.go:195] Run: systemctl --version
	I0419 19:18:39.695807  375092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 19:18:39.853724  375092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 19:18:39.860341  375092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 19:18:39.860419  375092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 19:18:39.877332  375092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 19:18:39.877364  375092 start.go:494] detecting cgroup driver to use...
	I0419 19:18:39.877439  375092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 19:18:39.895994  375092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 19:18:39.910466  375092 docker.go:217] disabling cri-docker service (if available) ...
	I0419 19:18:39.910532  375092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 19:18:39.925014  375092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 19:18:39.938467  375092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 19:18:40.058008  375092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 19:18:40.208246  375092 docker.go:233] disabling docker service ...
	I0419 19:18:40.208333  375092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 19:18:40.223385  375092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 19:18:40.236768  375092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 19:18:40.363988  375092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 19:18:40.491027  375092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 19:18:40.506211  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 19:18:40.525449  375092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 19:18:40.525529  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.536445  375092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 19:18:40.536524  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.549286  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.560927  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.572142  375092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 19:18:40.583318  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.594184  375092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.612337  375092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 19:18:40.623435  375092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 19:18:40.633427  375092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 19:18:40.633514  375092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 19:18:40.647605  375092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 19:18:40.657742  375092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:18:40.783811  375092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 19:18:40.921725  375092 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 19:18:40.921833  375092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 19:18:40.927315  375092 start.go:562] Will wait 60s for crictl version
	I0419 19:18:40.927393  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:18:40.931371  375092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 19:18:40.971118  375092 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 19:18:40.971237  375092 ssh_runner.go:195] Run: crio --version
	I0419 19:18:40.999551  375092 ssh_runner.go:195] Run: crio --version
	I0419 19:18:41.029644  375092 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 19:18:41.031080  375092 main.go:141] libmachine: (addons-310054) Calling .GetIP
	I0419 19:18:41.033612  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:41.033948  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:18:41.033997  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:18:41.034190  375092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 19:18:41.038609  375092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 19:18:41.053610  375092 kubeadm.go:877] updating cluster {Name:addons-310054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-310054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 19:18:41.053725  375092 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 19:18:41.053770  375092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 19:18:41.091321  375092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 19:18:41.091391  375092 ssh_runner.go:195] Run: which lz4
	I0419 19:18:41.095758  375092 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 19:18:41.099943  375092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 19:18:41.099972  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0419 19:18:42.484934  375092 crio.go:462] duration metric: took 1.389217156s to copy over tarball
	I0419 19:18:42.485015  375092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 19:18:44.817303  375092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.332253927s)
	I0419 19:18:44.817350  375092 crio.go:469] duration metric: took 2.332378718s to extract the tarball
	I0419 19:18:44.817362  375092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 19:18:44.855193  375092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 19:18:44.896647  375092 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 19:18:44.896682  375092 cache_images.go:84] Images are preloaded, skipping loading
	I0419 19:18:44.896693  375092 kubeadm.go:928] updating node { 192.168.39.199 8443 v1.30.0 crio true true} ...
	I0419 19:18:44.896840  375092 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-310054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-310054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 19:18:44.896934  375092 ssh_runner.go:195] Run: crio config
	I0419 19:18:44.940112  375092 cni.go:84] Creating CNI manager for ""
	I0419 19:18:44.940140  375092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 19:18:44.940156  375092 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 19:18:44.940179  375092 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-310054 NodeName:addons-310054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 19:18:44.940322  375092 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-310054"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 19:18:44.940384  375092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 19:18:44.950405  375092 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 19:18:44.950475  375092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 19:18:44.959667  375092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 19:18:44.976257  375092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 19:18:44.992884  375092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0419 19:18:45.009843  375092 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0419 19:18:45.013637  375092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 19:18:45.025600  375092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:18:45.149278  375092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:18:45.166871  375092 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054 for IP: 192.168.39.199
	I0419 19:18:45.166898  375092 certs.go:194] generating shared ca certs ...
	I0419 19:18:45.166925  375092 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.167097  375092 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 19:18:45.321937  375092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt ...
	I0419 19:18:45.321972  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt: {Name:mka9f4d26237d139a5d8a8ca8b1a4f31bee60863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.322145  375092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key ...
	I0419 19:18:45.322158  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key: {Name:mkb89f64522ee721d39bba6030b07627bcec99ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.322239  375092 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 19:18:45.423550  375092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt ...
	I0419 19:18:45.423582  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt: {Name:mkc3face1e61aaf29474360b24971352a763423e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.423755  375092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key ...
	I0419 19:18:45.423780  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key: {Name:mke7f37c222c065f5691ac38896876e23fa709b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.423847  375092 certs.go:256] generating profile certs ...
	I0419 19:18:45.423903  375092 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.key
	I0419 19:18:45.423917  375092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.crt with IP's: []
	I0419 19:18:45.660073  375092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.crt ...
	I0419 19:18:45.660106  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.crt: {Name:mk19b80906da429d97c5a2ac342073a5346b1042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.660313  375092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.key ...
	I0419 19:18:45.660332  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/client.key: {Name:mk9117d527af0e6214ce85ae7f5cae8ca289b2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.660440  375092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key.a7897bce
	I0419 19:18:45.660466  375092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt.a7897bce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.199]
	I0419 19:18:45.751399  375092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt.a7897bce ...
	I0419 19:18:45.751438  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt.a7897bce: {Name:mk13b2af2502b40a0f152cfaf79af74e7437d454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.751614  375092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key.a7897bce ...
	I0419 19:18:45.751630  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key.a7897bce: {Name:mk4321757e3a55fe0eef1bb37d858a085792aab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.751700  375092 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt.a7897bce -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt
	I0419 19:18:45.751794  375092 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key.a7897bce -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key
	I0419 19:18:45.751866  375092 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.key
	I0419 19:18:45.751891  375092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.crt with IP's: []
	I0419 19:18:45.887181  375092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.crt ...
	I0419 19:18:45.887215  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.crt: {Name:mka0b37487b48718dfb64246cb07349e86eaed34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.887390  375092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.key ...
	I0419 19:18:45.887403  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.key: {Name:mk8f3ef963416c07f86e193a81c43b64d6c8211f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:45.887565  375092 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 19:18:45.887602  375092 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 19:18:45.887625  375092 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 19:18:45.887647  375092 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 19:18:45.888253  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 19:18:45.921622  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 19:18:45.955749  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 19:18:45.983787  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 19:18:46.010528  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 19:18:46.036285  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 19:18:46.061929  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 19:18:46.089434  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/addons-310054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 19:18:46.116185  375092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 19:18:46.143283  375092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 19:18:46.161705  375092 ssh_runner.go:195] Run: openssl version
	I0419 19:18:46.167461  375092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 19:18:46.178456  375092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:18:46.182948  375092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:18:46.182997  375092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 19:18:46.188682  375092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 19:18:46.199636  375092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 19:18:46.203718  375092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 19:18:46.203775  375092 kubeadm.go:391] StartCluster: {Name:addons-310054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-310054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:18:46.203856  375092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 19:18:46.203903  375092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 19:18:46.239983  375092 cri.go:89] found id: ""
	I0419 19:18:46.240071  375092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 19:18:46.249955  375092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 19:18:46.259233  375092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 19:18:46.268938  375092 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 19:18:46.268964  375092 kubeadm.go:156] found existing configuration files:
	
	I0419 19:18:46.269012  375092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 19:18:46.277888  375092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 19:18:46.277957  375092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 19:18:46.287458  375092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 19:18:46.296679  375092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 19:18:46.296738  375092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 19:18:46.306510  375092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 19:18:46.315593  375092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 19:18:46.315655  375092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 19:18:46.325967  375092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 19:18:46.335730  375092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 19:18:46.335809  375092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 19:18:46.345218  375092 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 19:18:46.529030  375092 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 19:18:56.834814  375092 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 19:18:56.834904  375092 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 19:18:56.834993  375092 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 19:18:56.835120  375092 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 19:18:56.835233  375092 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 19:18:56.835300  375092 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 19:18:56.837131  375092 out.go:204]   - Generating certificates and keys ...
	I0419 19:18:56.837216  375092 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 19:18:56.837299  375092 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 19:18:56.837378  375092 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 19:18:56.837495  375092 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 19:18:56.837567  375092 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 19:18:56.837651  375092 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 19:18:56.837735  375092 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 19:18:56.837919  375092 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-310054 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0419 19:18:56.838003  375092 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 19:18:56.838159  375092 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-310054 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0419 19:18:56.838257  375092 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 19:18:56.838354  375092 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 19:18:56.838433  375092 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 19:18:56.838529  375092 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 19:18:56.838592  375092 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 19:18:56.838667  375092 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 19:18:56.838757  375092 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 19:18:56.838834  375092 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 19:18:56.838910  375092 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 19:18:56.839005  375092 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 19:18:56.839073  375092 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 19:18:56.840862  375092 out.go:204]   - Booting up control plane ...
	I0419 19:18:56.840986  375092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 19:18:56.841075  375092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 19:18:56.841161  375092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 19:18:56.841288  375092 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 19:18:56.841404  375092 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 19:18:56.841477  375092 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 19:18:56.841633  375092 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 19:18:56.841705  375092 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 19:18:56.841779  375092 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001216499s
	I0419 19:18:56.841850  375092 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 19:18:56.841912  375092 kubeadm.go:309] [api-check] The API server is healthy after 5.00221789s
	I0419 19:18:56.842023  375092 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 19:18:56.842172  375092 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 19:18:56.842229  375092 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 19:18:56.842400  375092 kubeadm.go:309] [mark-control-plane] Marking the node addons-310054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 19:18:56.842459  375092 kubeadm.go:309] [bootstrap-token] Using token: 59lgb7.6ronnvfc2ifupay5
	I0419 19:18:56.844109  375092 out.go:204]   - Configuring RBAC rules ...
	I0419 19:18:56.844231  375092 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 19:18:56.844356  375092 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 19:18:56.844535  375092 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 19:18:56.844721  375092 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 19:18:56.844873  375092 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 19:18:56.844985  375092 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 19:18:56.845120  375092 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 19:18:56.845192  375092 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 19:18:56.845260  375092 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 19:18:56.845275  375092 kubeadm.go:309] 
	I0419 19:18:56.845359  375092 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 19:18:56.845367  375092 kubeadm.go:309] 
	I0419 19:18:56.845459  375092 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 19:18:56.845469  375092 kubeadm.go:309] 
	I0419 19:18:56.845497  375092 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 19:18:56.845587  375092 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 19:18:56.845679  375092 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 19:18:56.845688  375092 kubeadm.go:309] 
	I0419 19:18:56.845761  375092 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 19:18:56.845770  375092 kubeadm.go:309] 
	I0419 19:18:56.845839  375092 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 19:18:56.845847  375092 kubeadm.go:309] 
	I0419 19:18:56.845923  375092 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 19:18:56.846016  375092 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 19:18:56.846107  375092 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 19:18:56.846120  375092 kubeadm.go:309] 
	I0419 19:18:56.846236  375092 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 19:18:56.846346  375092 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 19:18:56.846356  375092 kubeadm.go:309] 
	I0419 19:18:56.846473  375092 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 59lgb7.6ronnvfc2ifupay5 \
	I0419 19:18:56.846598  375092 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea \
	I0419 19:18:56.846619  375092 kubeadm.go:309] 	--control-plane 
	I0419 19:18:56.846623  375092 kubeadm.go:309] 
	I0419 19:18:56.846717  375092 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 19:18:56.846727  375092 kubeadm.go:309] 
	I0419 19:18:56.846833  375092 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 59lgb7.6ronnvfc2ifupay5 \
	I0419 19:18:56.846972  375092 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea 
	I0419 19:18:56.846984  375092 cni.go:84] Creating CNI manager for ""
	I0419 19:18:56.846995  375092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 19:18:56.848703  375092 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 19:18:56.850191  375092 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 19:18:56.864028  375092 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0419 19:18:56.889097  375092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 19:18:56.889196  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:56.889204  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-310054 minikube.k8s.io/updated_at=2024_04_19T19_18_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=addons-310054 minikube.k8s.io/primary=true
	I0419 19:18:56.919569  375092 ops.go:34] apiserver oom_adj: -16
	I0419 19:18:56.999549  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:57.500273  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:57.999996  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:58.500288  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:58.999673  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:18:59.499819  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:00.000605  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:00.500608  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:00.999837  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:01.500307  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:02.000064  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:02.499825  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:03.000494  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:03.500578  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:03.999600  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:04.500077  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:04.999709  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:05.500596  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:06.000372  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:06.500221  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:06.999889  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:07.500019  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:08.000411  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:08.500069  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:09.000047  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:09.500222  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:10.000230  375092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 19:19:10.125426  375092 kubeadm.go:1107] duration metric: took 13.236316767s to wait for elevateKubeSystemPrivileges
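The burst of `kubectl get sa default` invocations above is minikube polling, at roughly 500ms intervals, until the default service account exists in kube-system's namespace-provisioned state; once it does, the elevateKubeSystemPrivileges duration is reported. The sketch below shows that polling pattern with os/exec and a fixed interval; the function name, timeout, and interval are illustrative, not minikube's actual bootstrapper code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the timeout elapses.
// Illustrative only; minikube's real loop lives in its kubeadm bootstrapper.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}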
	W0419 19:19:10.125473  375092 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 19:19:10.125484  375092 kubeadm.go:393] duration metric: took 23.921714575s to StartCluster
	I0419 19:19:10.125508  375092 settings.go:142] acquiring lock: {Name:mk4d89c3e562693d551452a3da7ca47ff322d54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:19:10.125642  375092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 19:19:10.126270  375092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/kubeconfig: {Name:mk754e069328c06a767f4b9e66452a46be84b49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:19:10.126510  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 19:19:10.126537  375092 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 19:19:10.128080  375092 out.go:177] * Verifying Kubernetes components...
	I0419 19:19:10.126634  375092 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0419 19:19:10.126730  375092 config.go:182] Loaded profile config "addons-310054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 19:19:10.128227  375092 addons.go:69] Setting cloud-spanner=true in profile "addons-310054"
	I0419 19:19:10.129303  375092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 19:19:10.129340  375092 addons.go:234] Setting addon cloud-spanner=true in "addons-310054"
	I0419 19:19:10.128248  375092 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-310054"
	I0419 19:19:10.129487  375092 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-310054"
	I0419 19:19:10.128255  375092 addons.go:69] Setting inspektor-gadget=true in profile "addons-310054"
	I0419 19:19:10.129529  375092 addons.go:234] Setting addon inspektor-gadget=true in "addons-310054"
	I0419 19:19:10.129536  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.129561  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.128260  375092 addons.go:69] Setting registry=true in profile "addons-310054"
	I0419 19:19:10.129634  375092 addons.go:234] Setting addon registry=true in "addons-310054"
	I0419 19:19:10.129666  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.128267  375092 addons.go:69] Setting default-storageclass=true in profile "addons-310054"
	I0419 19:19:10.129720  375092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-310054"
	I0419 19:19:10.128253  375092 addons.go:69] Setting yakd=true in profile "addons-310054"
	I0419 19:19:10.129807  375092 addons.go:234] Setting addon yakd=true in "addons-310054"
	I0419 19:19:10.129841  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.128269  375092 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-310054"
	I0419 19:19:10.129910  375092 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-310054"
	I0419 19:19:10.129937  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.129991  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130008  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130016  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130052  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130054  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130077  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130081  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130092  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.128275  375092 addons.go:69] Setting gcp-auth=true in profile "addons-310054"
	I0419 19:19:10.130249  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130262  375092 mustload.go:65] Loading cluster: addons-310054
	I0419 19:19:10.130265  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.128278  375092 addons.go:69] Setting storage-provisioner=true in profile "addons-310054"
	I0419 19:19:10.130286  375092 addons.go:234] Setting addon storage-provisioner=true in "addons-310054"
	I0419 19:19:10.130313  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.128279  375092 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-310054"
	I0419 19:19:10.130324  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130352  375092 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-310054"
	I0419 19:19:10.128283  375092 addons.go:69] Setting helm-tiller=true in profile "addons-310054"
	I0419 19:19:10.130384  375092 addons.go:234] Setting addon helm-tiller=true in "addons-310054"
	I0419 19:19:10.128289  375092 addons.go:69] Setting volumesnapshots=true in profile "addons-310054"
	I0419 19:19:10.130455  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.130489  375092 addons.go:234] Setting addon volumesnapshots=true in "addons-310054"
	I0419 19:19:10.130539  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.130697  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130719  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130751  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.130797  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.130819  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.130914  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.128283  375092 addons.go:69] Setting metrics-server=true in profile "addons-310054"
	I0419 19:19:10.130961  375092 addons.go:234] Setting addon metrics-server=true in "addons-310054"
	I0419 19:19:10.128302  375092 addons.go:69] Setting ingress-dns=true in profile "addons-310054"
	I0419 19:19:10.130985  375092 addons.go:234] Setting addon ingress-dns=true in "addons-310054"
	I0419 19:19:10.128306  375092 addons.go:69] Setting ingress=true in profile "addons-310054"
	I0419 19:19:10.131006  375092 addons.go:234] Setting addon ingress=true in "addons-310054"
	I0419 19:19:10.129389  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.130938  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.131135  375092 config.go:182] Loaded profile config "addons-310054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 19:19:10.131178  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.131245  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.131327  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.131347  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.131492  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.131506  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.131521  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.131527  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.131743  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.131764  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.131953  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.132342  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.132378  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.156332  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I0419 19:19:10.156553  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0419 19:19:10.156950  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.157088  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.157497  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.157523  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.157667  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.157679  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.158075  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.158655  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.158710  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.159092  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0419 19:19:10.159268  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.159746  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.159846  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.159870  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.160214  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.160231  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.160343  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0419 19:19:10.160615  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.160685  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0419 19:19:10.160929  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.161020  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.161076  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.161606  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.161628  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.161770  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.161784  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.162204  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.162916  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.162966  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.163654  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.163732  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0419 19:19:10.164489  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.164552  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.169103  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.169152  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.170485  375092 addons.go:234] Setting addon default-storageclass=true in "addons-310054"
	I0419 19:19:10.170539  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.170891  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.170937  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.171432  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.172508  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.172531  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.173099  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.173382  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.176696  375092 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-310054"
	I0419 19:19:10.176748  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.177116  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.177143  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.177344  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0419 19:19:10.177932  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.178525  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.178544  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.178943  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.179205  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0419 19:19:10.179941  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.179980  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.185213  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.185843  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.185869  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.186439  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.187014  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.187060  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.187813  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I0419 19:19:10.188299  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.188888  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.188915  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.189302  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.189950  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.190011  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.198441  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0419 19:19:10.199190  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.199802  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.199834  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.200211  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.200849  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.200902  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.201695  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0419 19:19:10.203834  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0419 19:19:10.203845  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0419 19:19:10.203869  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0419 19:19:10.204264  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.204349  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.204351  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.204755  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.204779  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.204919  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.204935  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.205028  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.205049  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.205103  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.205535  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.205595  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.205683  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.205731  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.205982  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.206544  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.206586  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.208408  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.210488  375092 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0419 19:19:10.209087  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.211449  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I0419 19:19:10.211953  375092 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 19:19:10.211968  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0419 19:19:10.211988  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.213267  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.213285  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.213703  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.213777  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.214090  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.214824  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.214848  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.215315  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.215557  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.216009  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.216094  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.219115  375092 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0419 19:19:10.216680  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.217035  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.217809  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.219218  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0419 19:19:10.220797  375092 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0419 19:19:10.220820  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0419 19:19:10.220842  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.220930  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.221097  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.221275  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.221364  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.221404  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.223203  375092 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0419 19:19:10.221869  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.223242  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.222579  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0419 19:19:10.223615  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.224685  375092 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0419 19:19:10.224703  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0419 19:19:10.223852  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.224723  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.224542  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.224548  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0419 19:19:10.224854  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.224899  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.224929  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.225423  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.225926  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.225955  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.226107  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.226120  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.226539  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.226812  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.229988  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.230023  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.229994  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.230083  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.230716  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0419 19:19:10.230788  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.232288  375092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0419 19:19:10.230716  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.231051  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.231077  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.231112  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.232496  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0419 19:19:10.232929  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.233744  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.235846  375092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 19:19:10.234859  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.234878  375092 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 19:19:10.235225  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.235361  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.235382  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.235702  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.239556  375092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 19:19:10.240982  375092 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 19:19:10.241002  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0419 19:19:10.241022  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.239522  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0419 19:19:10.237481  375092 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 19:19:10.241134  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 19:19:10.241144  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.238420  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.241180  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.238446  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.238473  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.237375  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.242026  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.242058  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.242184  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.242397  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.242420  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.242465  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.243351  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.243371  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.243731  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.243971  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.244941  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33899
	I0419 19:19:10.245381  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.245703  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.246064  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.246084  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.246148  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.246164  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.246311  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.246373  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.246487  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.246577  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.246647  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.246676  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.246722  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.247017  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.249038  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0419 19:19:10.247088  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.247133  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0419 19:19:10.247236  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.247488  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.247532  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.247766  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0419 19:19:10.248293  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.248433  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0419 19:19:10.252763  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I0419 19:19:10.258275  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0419 19:19:10.256811  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.256846  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.257444  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.257499  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.257665  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.259441  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0419 19:19:10.259453  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42409
	I0419 19:19:10.260056  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.260133  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.260805  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.260902  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0419 19:19:10.260963  375092 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0419 19:19:10.261250  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.262179  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0419 19:19:10.263475  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0419 19:19:10.262193  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.261507  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.261550  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.262211  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.262220  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.261330  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.262819  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:10.265152  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.265196  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.265491  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.265509  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.266356  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0419 19:19:10.266584  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.267545  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0419 19:19:10.267588  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.267952  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.268855  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.269031  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.269126  375092 out.go:177]   - Using image docker.io/registry:2.8.3
	I0419 19:19:10.273844  375092 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0419 19:19:10.268805  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.275574  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0419 19:19:10.275596  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.268143  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.276976  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0419 19:19:10.277040  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.269579  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.273859  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0419 19:19:10.269197  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0419 19:19:10.269269  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.269537  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.277542  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.278437  375092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0419 19:19:10.279896  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0419 19:19:10.279920  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0419 19:19:10.279940  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.278587  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.278601  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.278617  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.278866  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.279079  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.282152  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.282523  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.282909  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:10.282932  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:10.290891  375092 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0419 19:19:10.290207  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.291656  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0419 19:19:10.292685  375092 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0419 19:19:10.292745  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0419 19:19:10.292772  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.292109  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.295093  375092 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0419 19:19:10.293826  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.296231  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0419 19:19:10.296693  375092 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 19:19:10.296713  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0419 19:19:10.296733  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.296794  375092 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0419 19:19:10.297418  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.302780  375092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0419 19:19:10.302797  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0419 19:19:10.297896  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.302845  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.302869  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.298413  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.302913  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.302938  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.298895  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.302961  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.302979  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.299205  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.299252  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.303034  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.303043  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.303051  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.299777  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.299948  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.301588  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.302229  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.303129  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.302241  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.303148  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.302818  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.303169  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.303190  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.303917  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.303963  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.303994  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.304030  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.304043  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.304080  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.304116  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.304138  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.304190  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.304212  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.304370  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.304440  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.304493  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.304717  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.304770  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.304777  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.305064  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.305077  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.305323  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.305317  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.308253  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.308301  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.308591  375092 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 19:19:10.308608  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 19:19:10.308627  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.308789  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.308814  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.309024  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.309209  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.309412  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.309588  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.309897  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0419 19:19:10.310495  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.311282  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.311300  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.311832  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.311891  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.312140  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.312478  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.312501  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.312538  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.312724  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.312964  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.313131  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	W0419 19:19:10.313328  375092 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37556->192.168.39.199:22: read: connection reset by peer
	I0419 19:19:10.313363  375092 retry.go:31] will retry after 137.884263ms: ssh: handshake failed: read tcp 192.168.39.1:37556->192.168.39.199:22: read: connection reset by peer
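The W/I pair above shows sshutil hitting a connection reset during the SSH handshake while many addon goroutines dial the node at once, then scheduling a retry after ~138ms. A minimal sketch of that dial-and-retry pattern is below; the helper name, attempt count, and delay are assumptions for illustration, not minikube's retry package API.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry retries a TCP dial to the node's SSH port with a short delay between
// attempts, mirroring the "will retry after ..." behaviour sshutil logs above.
// Purely illustrative; minikube's actual retry helper and intervals differ.
func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("192.168.39.199:22", 5, 150*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}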
	I0419 19:19:10.313621  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0419 19:19:10.313813  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.314068  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:10.392323  375092 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0419 19:19:10.314469  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:10.392391  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:10.394071  375092 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0419 19:19:10.394094  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0419 19:19:10.394127  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.394651  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:10.394912  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:10.396794  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:10.530169  375092 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0419 19:19:10.397972  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.398624  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.593823  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0419 19:19:10.597255  375092 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0419 19:19:10.619638  375092 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0419 19:19:10.627426  375092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0419 19:19:10.642244  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0419 19:19:10.643834  375092 out.go:177]   - Using image docker.io/busybox:stable
	I0419 19:19:10.642313  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.642353  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0419 19:19:10.642536  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0419 19:19:10.642583  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.645452  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.645452  375092 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 19:19:10.645473  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0419 19:19:10.645487  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:10.645617  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.645827  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.649294  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.649787  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:10.649817  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:10.649967  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:10.650186  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:10.650371  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:10.650520  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:10.674098  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 19:19:10.674167  375092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 19:19:10.674218  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 19:19:10.698233  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0419 19:19:10.718925  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0419 19:19:10.718957  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0419 19:19:10.752297  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0419 19:19:10.755984  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0419 19:19:10.819903  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 19:19:10.826318  375092 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0419 19:19:10.826342  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0419 19:19:10.877715  375092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0419 19:19:10.877745  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0419 19:19:10.909827  375092 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0419 19:19:10.909853  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0419 19:19:10.960987  375092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 19:19:10.961012  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0419 19:19:10.963746  375092 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0419 19:19:10.963767  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0419 19:19:10.996754  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0419 19:19:10.996780  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0419 19:19:11.046390  375092 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0419 19:19:11.046414  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0419 19:19:11.075948  375092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0419 19:19:11.075979  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0419 19:19:11.076046  375092 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0419 19:19:11.076072  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0419 19:19:11.111345  375092 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0419 19:19:11.111369  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0419 19:19:11.200895  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0419 19:19:11.204121  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0419 19:19:11.219442  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0419 19:19:11.237394  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0419 19:19:11.237427  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0419 19:19:11.240176  375092 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0419 19:19:11.240198  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0419 19:19:11.250489  375092 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0419 19:19:11.250519  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0419 19:19:11.264051  375092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0419 19:19:11.264069  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0419 19:19:11.299945  375092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0419 19:19:11.299973  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0419 19:19:11.459685  375092 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0419 19:19:11.459719  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0419 19:19:11.460054  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0419 19:19:11.475362  375092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0419 19:19:11.475388  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0419 19:19:11.485045  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0419 19:19:11.485066  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0419 19:19:11.504718  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0419 19:19:11.504746  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0419 19:19:11.764750  375092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0419 19:19:11.764779  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0419 19:19:11.796306  375092 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0419 19:19:11.796333  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0419 19:19:11.834998  375092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 19:19:11.835027  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0419 19:19:11.899020  375092 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 19:19:11.899049  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0419 19:19:12.114171  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0419 19:19:12.114196  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0419 19:19:12.139669  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0419 19:19:12.140112  375092 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 19:19:12.140139  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0419 19:19:12.151751  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 19:19:12.428049  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0419 19:19:12.428084  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0419 19:19:12.475348  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0419 19:19:12.812142  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0419 19:19:12.812172  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0419 19:19:13.147336  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0419 19:19:13.147365  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0419 19:19:13.620896  375092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 19:19:13.620930  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0419 19:19:14.052843  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0419 19:19:17.521740  375092 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0419 19:19:17.521796  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:17.525317  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:17.525834  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:17.525861  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:17.526094  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:17.526344  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:17.526541  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:17.526683  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:17.991800  375092 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0419 19:19:18.106741  375092 addons.go:234] Setting addon gcp-auth=true in "addons-310054"
	I0419 19:19:18.106815  375092 host.go:66] Checking if "addons-310054" exists ...
	I0419 19:19:18.107144  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:18.107180  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:18.123333  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0419 19:19:18.123835  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:18.124399  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:18.124424  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:18.124797  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:18.125304  375092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 19:19:18.125354  375092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 19:19:18.140896  375092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I0419 19:19:18.141451  375092 main.go:141] libmachine: () Calling .GetVersion
	I0419 19:19:18.142019  375092 main.go:141] libmachine: Using API Version  1
	I0419 19:19:18.142045  375092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 19:19:18.142380  375092 main.go:141] libmachine: () Calling .GetMachineName
	I0419 19:19:18.142582  375092 main.go:141] libmachine: (addons-310054) Calling .GetState
	I0419 19:19:18.144069  375092 main.go:141] libmachine: (addons-310054) Calling .DriverName
	I0419 19:19:18.144329  375092 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0419 19:19:18.144358  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHHostname
	I0419 19:19:18.147243  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:18.147736  375092 main.go:141] libmachine: (addons-310054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:af:a1", ip: ""} in network mk-addons-310054: {Iface:virbr1 ExpiryTime:2024-04-19 20:18:29 +0000 UTC Type:0 Mac:52:54:00:8d:af:a1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-310054 Clientid:01:52:54:00:8d:af:a1}
	I0419 19:19:18.147779  375092 main.go:141] libmachine: (addons-310054) DBG | domain addons-310054 has defined IP address 192.168.39.199 and MAC address 52:54:00:8d:af:a1 in network mk-addons-310054
	I0419 19:19:18.147953  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHPort
	I0419 19:19:18.148157  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHKeyPath
	I0419 19:19:18.148364  375092 main.go:141] libmachine: (addons-310054) Calling .GetSSHUsername
	I0419 19:19:18.148546  375092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/addons-310054/id_rsa Username:docker}
	I0419 19:19:19.277684  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.603543656s)
	I0419 19:19:19.277756  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.277768  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.277779  375092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.603524533s)
	I0419 19:19:19.277829  375092 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
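(The sed pipeline completed just above rewrites the coredns ConfigMap so the Corefile resolves host.minikube.internal to the host-side bridge address. Reconstructed from that sed expression rather than read back from the cluster, the injected Corefile fragment looks roughly like:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
)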
	I0419 19:19:19.277884  375092 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.603688666s)
	I0419 19:19:19.277940  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.57966985s)
	I0419 19:19:19.277986  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.277999  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278034  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.52570504s)
	I0419 19:19:19.278079  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278086  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.522075021s)
	I0419 19:19:19.278091  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278103  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278110  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278179  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.45823682s)
	I0419 19:19:19.278193  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278201  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278247  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.077313745s)
	I0419 19:19:19.278266  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.074116496s)
	I0419 19:19:19.278286  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278299  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278281  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278331  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278412  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.058938004s)
	I0419 19:19:19.278435  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278446  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278531  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.818453371s)
	I0419 19:19:19.278549  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278558  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278658  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.138957574s)
	I0419 19:19:19.278678  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278688  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.278822  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.127032512s)
	I0419 19:19:19.278840  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.278852  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.278861  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278868  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	W0419 19:19:19.278867  375092 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0419 19:19:19.278910  375092 retry.go:31] will retry after 216.378014ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
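(This failure is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kinds yet, hence "ensure CRDs are installed first". minikube just retries, and at 19:19:19.496475 re-applies with --force. Outside the harness, a sketch of one way to avoid the race, assuming kubectl access to the same manifests on the node, is to apply the CRDs first and wait for them to be Established before applying the custom resources:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)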
	I0419 19:19:19.278954  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.278978  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.278985  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.278992  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.278999  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.279017  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.803635902s)
	I0419 19:19:19.279045  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.279054  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.279106  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.279129  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.279135  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.279142  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.279149  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.279170  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.279207  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.279225  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.279232  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.279240  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.279247  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.279295  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.279313  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.279313  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.279328  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.279477  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.279512  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.279519  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.279526  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.279532  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.281346  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.281382  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.281389  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.281397  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.281404  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.281668  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.281723  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.281729  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.281739  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.281746  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.279319  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.282065  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.282074  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.282344  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.282377  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.282396  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.282402  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.282425  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.282432  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.282441  375092 addons.go:470] Verifying addon registry=true in "addons-310054"
	I0419 19:19:19.282560  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.282582  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.282588  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.284771  375092 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-310054 service yakd-dashboard -n yakd-dashboard
	
	I0419 19:19:19.282961  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.282987  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283041  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283058  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283082  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283096  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283113  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283156  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283752  375092 node_ready.go:35] waiting up to 6m0s for node "addons-310054" to be "Ready" ...
	I0419 19:19:19.283880  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283897  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283920  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283937  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.283964  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.283984  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.285018  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.642673138s)
	I0419 19:19:19.286623  375092 out.go:177] * Verifying registry addon...
	I0419 19:19:19.286646  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286654  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286630  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286668  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286679  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286685  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.286705  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.288345  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.286712  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.288405  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.288260  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.288425  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.288327  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.288466  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.288413  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.288607  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.288618  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.288627  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.288648  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.289323  375092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0419 19:19:19.289532  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.289554  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.289554  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.289572  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.289578  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.289584  375092 addons.go:470] Verifying addon ingress=true in "addons-310054"
	I0419 19:19:19.289600  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.289611  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.290872  375092 out.go:177] * Verifying ingress addon...
	I0419 19:19:19.290100  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.292087  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.290121  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.292111  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.292121  375092 addons.go:470] Verifying addon metrics-server=true in "addons-310054"
	I0419 19:19:19.290277  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.292854  375092 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0419 19:19:19.298896  375092 node_ready.go:49] node "addons-310054" has status "Ready":"True"
	I0419 19:19:19.298917  375092 node_ready.go:38] duration metric: took 12.209615ms for node "addons-310054" to be "Ready" ...
	I0419 19:19:19.298926  375092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:19:19.309770  375092 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0419 19:19:19.309794  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:19.316692  375092 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0419 19:19:19.316714  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:19.339002  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.339032  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.339347  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.339401  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.339426  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	W0419 19:19:19.339537  375092 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
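("the object has been modified" is Kubernetes' optimistic-concurrency conflict: the addon tried to mark local-path as the default StorageClass from a stale resourceVersion while another writer updated the same object, so the update was rejected. Re-reading and retrying, or patching only the default-class annotation, sidesteps the conflict; an illustrative command (not what the addon itself runs) would be:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
)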
	I0419 19:19:19.340746  375092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8mdw7" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.358287  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:19.358307  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:19.358699  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:19.358721  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:19.358728  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:19.382655  375092 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mdw7" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:19.382686  375092 pod_ready.go:81] duration metric: took 41.915065ms for pod "coredns-7db6d8ff4d-8mdw7" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.382701  375092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9272m" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.468060  375092 pod_ready.go:92] pod "coredns-7db6d8ff4d-9272m" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:19.468086  375092 pod_ready.go:81] duration metric: took 85.377379ms for pod "coredns-7db6d8ff4d-9272m" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.468097  375092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.496475  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0419 19:19:19.500012  375092 pod_ready.go:92] pod "etcd-addons-310054" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:19.500032  375092 pod_ready.go:81] duration metric: took 31.928686ms for pod "etcd-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.500042  375092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.538431  375092 pod_ready.go:92] pod "kube-apiserver-addons-310054" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:19.538457  375092 pod_ready.go:81] duration metric: took 38.40769ms for pod "kube-apiserver-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.538469  375092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.692199  375092 pod_ready.go:92] pod "kube-controller-manager-addons-310054" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:19.692237  375092 pod_ready.go:81] duration metric: took 153.760246ms for pod "kube-controller-manager-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.692255  375092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ckg29" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:19.782657  375092 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-310054" context rescaled to 1 replicas
	I0419 19:19:19.794685  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:19.798920  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:20.090426  375092 pod_ready.go:92] pod "kube-proxy-ckg29" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:20.090450  375092 pod_ready.go:81] duration metric: took 398.187689ms for pod "kube-proxy-ckg29" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:20.090461  375092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:20.304228  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:20.308326  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:20.497426  375092 pod_ready.go:92] pod "kube-scheduler-addons-310054" in "kube-system" namespace has status "Ready":"True"
	I0419 19:19:20.497458  375092 pod_ready.go:81] duration metric: took 406.989894ms for pod "kube-scheduler-addons-310054" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:20.497472  375092 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace to be "Ready" ...
	I0419 19:19:20.803950  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:20.809689  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:21.026837  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.973934523s)
	I0419 19:19:21.026894  375092 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.882537323s)
	I0419 19:19:21.026959  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:21.026984  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:21.028692  375092 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0419 19:19:21.027378  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:21.027425  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:21.029924  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:21.029940  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:21.029953  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:21.031389  375092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0419 19:19:21.030217  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:21.032798  375092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0419 19:19:21.030248  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:21.031421  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:21.032840  375092 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-310054"
	I0419 19:19:21.032849  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0419 19:19:21.034301  375092 out.go:177] * Verifying csi-hostpath-driver addon...
	I0419 19:19:21.036667  375092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0419 19:19:21.068734  375092 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0419 19:19:21.068766  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:21.184370  375092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0419 19:19:21.184400  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0419 19:19:21.284118  375092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 19:19:21.284146  375092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0419 19:19:21.296830  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:21.300266  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:21.374558  375092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0419 19:19:21.489847  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.993308531s)
	I0419 19:19:21.489915  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:21.489931  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:21.490293  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:21.490314  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:21.490325  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:21.490334  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:21.490788  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:21.490809  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:21.553387  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:21.797153  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:21.799453  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:22.043371  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:22.301418  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:22.310536  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:22.625803  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:22.641845  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:22.657418  375092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.282808131s)
	I0419 19:19:22.657487  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:22.657507  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:22.657818  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:22.657902  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:22.657923  375092 main.go:141] libmachine: Making call to close driver server
	I0419 19:19:22.657926  375092 main.go:141] libmachine: (addons-310054) DBG | Closing plugin on server side
	I0419 19:19:22.657932  375092 main.go:141] libmachine: (addons-310054) Calling .Close
	I0419 19:19:22.658277  375092 main.go:141] libmachine: Successfully made call to close driver server
	I0419 19:19:22.658312  375092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 19:19:22.660239  375092 addons.go:470] Verifying addon gcp-auth=true in "addons-310054"
	I0419 19:19:22.661988  375092 out.go:177] * Verifying gcp-auth addon...
	I0419 19:19:22.664056  375092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0419 19:19:22.673916  375092 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0419 19:19:22.673943  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:22.796241  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:22.798053  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:23.048407  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:23.168747  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:23.294060  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:23.297003  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:23.551525  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:23.672420  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:23.794700  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:23.797908  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:24.044984  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:24.167849  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:24.295504  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:24.297729  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:24.542171  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:24.668005  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:24.795568  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:24.807441  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:25.003565  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:25.042836  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:25.167482  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:25.295967  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:25.300449  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:25.544890  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:25.667229  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:25.795511  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:25.797047  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:26.042498  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:26.169101  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:26.603409  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:26.604009  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:26.607943  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:26.667963  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:26.795759  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:26.798395  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:27.042518  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:27.168445  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:27.301874  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:27.305857  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:27.507158  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:27.542866  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:27.668008  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:28.093319  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:28.093558  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:28.096230  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:28.169310  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:28.295112  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:28.300157  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:28.542592  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:28.668353  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:28.797664  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:28.798074  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:29.042311  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:29.168283  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:29.304933  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:29.306827  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:29.546935  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:29.667948  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:29.796972  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:29.800536  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:30.004321  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:30.043520  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:30.167557  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:30.295411  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:30.297876  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:30.542735  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:30.667996  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:30.794427  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:30.796777  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:31.042252  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:31.167964  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:31.294847  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:31.300202  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:31.541787  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:31.668017  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:31.794496  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:31.797096  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:32.042490  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:32.169570  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:32.294835  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:32.298216  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:32.506005  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:32.542223  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:32.668474  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:32.797063  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:32.798866  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:33.043622  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:33.168786  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:33.296382  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:33.298637  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:33.545861  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:33.675477  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:33.794681  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:33.797375  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:34.042989  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:34.168999  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:34.297197  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:34.297706  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:34.542359  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:34.667968  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:34.794196  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:34.796892  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:35.003304  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:35.042050  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:35.167601  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:35.295301  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:35.297879  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:35.542991  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:35.668911  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:35.794691  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:35.797337  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:36.053370  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:36.168011  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:36.294530  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:36.298479  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:36.552424  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:36.668009  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:36.794280  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:36.797268  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:37.003465  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:37.046678  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:37.168995  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:37.297357  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:37.300481  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:37.543405  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:37.668232  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:37.795376  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:37.798470  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:38.043587  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:38.168428  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:38.296399  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:38.297470  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:38.541995  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:38.668100  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:38.795008  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:38.802430  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:39.005579  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:39.043982  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:39.168943  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:39.295963  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:39.297912  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:39.542346  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:39.670697  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:39.793877  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:39.797134  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:40.044062  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:40.168388  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:40.298727  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:40.302428  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:40.544790  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:40.668987  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:40.794248  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:40.797093  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:41.044281  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:41.167979  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:41.294294  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:41.296748  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:41.505674  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:41.543014  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:41.668945  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:41.870765  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:41.871056  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:42.052922  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:42.168592  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:42.294707  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:42.297360  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:42.542963  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:42.668432  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:42.799100  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:42.799256  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:43.044448  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:43.168200  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:43.294542  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:43.297069  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:43.542715  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:43.668387  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:43.795299  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:43.797329  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:44.004132  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:44.042952  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:44.168132  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:44.295840  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:44.297608  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:44.545583  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:44.669289  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:44.794889  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:44.797814  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:45.042854  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:45.168796  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:45.295620  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:45.298196  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:45.545817  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:45.667444  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:45.795027  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:45.799033  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:46.004566  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:46.042798  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:46.169460  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:46.294618  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:46.296984  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:46.543157  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:46.667683  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:46.795311  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:46.797273  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:47.042906  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:47.167561  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:47.293749  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:47.296332  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:47.541933  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:47.680349  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:47.798430  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:47.803476  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:48.043630  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:48.168099  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:48.294638  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:48.298120  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:48.504681  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:48.546720  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:48.667726  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:48.794707  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:48.797100  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:49.042692  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:49.167989  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:49.294101  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:49.296471  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:49.543788  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:49.668446  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:49.796091  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:49.803591  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:50.384787  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:50.387132  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:50.387483  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:50.389788  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:50.542908  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:50.667721  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:50.796575  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:50.799013  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:51.002833  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:51.041900  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:51.167549  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:51.296437  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:51.299647  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:51.542368  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:51.668249  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:51.800995  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:51.801033  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:52.043661  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:52.168229  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:52.297587  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:52.298622  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:52.542009  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:52.667690  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:52.800016  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:52.800166  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:53.003818  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:53.042731  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:53.169093  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:53.294459  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:53.297754  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:53.542110  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:53.668422  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:53.799013  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:53.801237  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:54.042374  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:54.168943  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:54.294200  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:54.296950  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:54.554168  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:55.033170  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:55.033293  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:55.035822  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:55.036713  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:55.042374  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:55.169128  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:55.294719  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:55.301547  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:55.544652  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:55.668465  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:55.795908  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:55.807601  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:56.049996  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:56.167140  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:56.294221  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:56.296708  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:56.547614  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:56.668435  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:56.800254  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:56.800307  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:57.042808  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:57.168285  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:57.294632  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:57.297555  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:57.502951  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:57.543879  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:57.669824  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:57.811826  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:57.813227  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:58.046250  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:58.167809  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:58.294462  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:58.297446  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:58.542773  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:58.668011  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:58.795893  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:58.799511  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:59.042034  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:59.167859  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:59.294439  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:59.296879  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:19:59.508183  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:19:59.543365  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:19:59.667905  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:19:59.794878  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:19:59.796844  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:00.050615  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:00.169858  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:00.295720  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:00.297814  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:00.548946  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:00.668351  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:00.794592  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:00.797324  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:01.042844  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:01.167516  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:01.293800  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:01.296977  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:01.542745  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:01.668406  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:01.794879  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:01.797801  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:02.004069  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:02.042164  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:02.167716  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:02.293989  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:02.297266  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:02.889827  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:02.890865  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:02.891564  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:02.891593  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:03.046738  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:03.168321  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:03.295364  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:03.297591  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:03.542226  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:03.668173  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:03.794422  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:03.796887  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:04.041880  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:04.167783  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:04.296066  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:04.298502  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:04.503178  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:04.542454  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:04.673385  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:04.795911  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:04.798426  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:05.042699  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:05.169373  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:05.297981  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:05.299251  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:05.542303  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:05.668097  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:05.794776  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:05.797122  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:06.042022  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:06.167339  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:06.295044  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:06.297657  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:06.504028  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:06.545746  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:06.668873  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:06.795975  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:06.802156  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:07.257550  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:07.260118  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:07.312101  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:07.312151  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:07.543390  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:07.667983  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:07.794715  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:07.797749  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:08.042201  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:08.167554  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:08.294873  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:08.298263  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:08.504382  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:08.542231  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:08.668281  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:08.795305  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:08.798165  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:09.042290  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:09.168228  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:09.452193  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:09.452409  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:09.542176  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:09.667979  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:09.794419  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:09.798571  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:10.051942  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:10.167916  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:10.294451  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:10.299601  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:10.542282  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:10.668156  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:10.795322  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0419 19:20:10.797537  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:11.003886  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:11.042209  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:11.167737  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:11.295332  375092 kapi.go:107] duration metric: took 52.006004676s to wait for kubernetes.io/minikube-addons=registry ...
	I0419 19:20:11.297285  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:11.542122  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:11.667401  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:11.797693  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:12.041933  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:12.168115  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:12.296978  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:12.542601  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:12.668466  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:12.798077  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:13.005579  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:13.042464  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:13.168378  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:13.297431  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:13.542753  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:13.668231  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:13.799085  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:14.042988  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:14.168106  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:14.298015  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:14.543170  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:14.669887  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:14.797871  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:15.042657  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:15.168548  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:15.298109  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:15.505166  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:15.542665  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:15.669024  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:15.801385  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:16.043417  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:16.168676  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:16.298305  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:16.544950  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:16.667591  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:16.798105  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:17.048543  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:17.168493  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:17.549995  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:17.554192  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:17.561826  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:17.667975  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:17.814525  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:18.042105  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:18.177041  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:18.298707  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:18.544832  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:18.667331  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:18.797478  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:19.042626  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:19.168015  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:19.297879  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:19.542184  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:19.668266  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:19.797407  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:20.003690  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:20.043279  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:20.168168  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:20.300275  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:20.556260  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:20.668119  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:20.797925  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:21.042291  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:21.167794  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:21.298546  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:21.542334  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:21.668038  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:21.797783  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:22.004265  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:22.047179  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:22.173994  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:22.297824  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:22.542121  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:22.667849  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:22.797811  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:23.041805  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:23.176911  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:23.300134  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:23.542997  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:23.668298  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:23.800714  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:24.044270  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:24.168156  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:24.334612  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:24.508349  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:24.551306  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:24.667923  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:24.798190  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:25.043225  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:25.167805  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:25.296906  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:25.547092  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:25.670196  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:25.801555  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:26.042793  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:26.169161  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:26.307796  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:26.549356  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:26.670895  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:26.802658  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:27.004093  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:27.053026  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:27.168355  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:27.421166  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:27.544424  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:27.668147  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:27.797202  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:28.045716  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:28.168278  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:28.299708  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:28.543056  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:28.667873  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:28.798080  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:29.043082  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:29.167920  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:29.298084  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:29.505589  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:29.543128  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:29.809237  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:29.809649  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:30.041968  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:30.167870  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:30.298161  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:30.543433  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:30.667672  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:30.797779  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:31.042869  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:31.168959  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:31.298468  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:31.542390  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:31.668737  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:31.797942  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:32.005772  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:32.042445  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:32.168270  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:32.297660  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:32.542534  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:32.668147  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:32.797126  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:33.042246  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:33.167561  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:33.299471  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:33.542530  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:33.668861  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:33.798092  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:34.042880  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:34.168269  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:34.297241  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:34.503415  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:34.542382  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:34.668299  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:34.797535  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:35.047126  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:35.168920  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:35.297720  375092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0419 19:20:35.542382  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:35.667091  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:35.796877  375092 kapi.go:107] duration metric: took 1m16.504019262s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0419 19:20:36.042810  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:36.168377  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:36.511747  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:36.551497  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:36.668521  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:37.042141  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:37.168337  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:37.541941  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:37.667748  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:38.044977  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:38.167908  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:38.543028  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:38.667897  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:39.004071  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:39.042788  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:39.168092  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:39.542017  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:39.668075  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:40.041433  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:40.174672  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:40.542598  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:40.669294  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0419 19:20:41.004109  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:41.041949  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:41.168815  375092 kapi.go:107] duration metric: took 1m18.50475468s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0419 19:20:41.170537  375092 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-310054 cluster.
	I0419 19:20:41.171681  375092 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0419 19:20:41.172772  375092 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0419 19:20:41.544982  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:42.063062  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:42.544306  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:43.005309  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:43.046015  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:43.543152  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:44.041490  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:44.545549  375092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0419 19:20:45.059948  375092 kapi.go:107] duration metric: took 1m24.023277961s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0419 19:20:45.062156  375092 out.go:177] * Enabled addons: storage-provisioner, yakd, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0419 19:20:45.063869  375092 addons.go:505] duration metric: took 1m34.93722953s for enable addons: enabled=[storage-provisioner yakd cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0419 19:20:45.071341  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:47.504390  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:49.505131  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:51.505235  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:53.505344  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:56.009925  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:20:58.503526  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:21:01.003384  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:21:03.004010  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:21:05.009051  375092 pod_ready.go:102] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"False"
	I0419 19:21:05.504676  375092 pod_ready.go:92] pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace has status "Ready":"True"
	I0419 19:21:05.504700  375092 pod_ready.go:81] duration metric: took 1m45.007222506s for pod "metrics-server-c59844bb4-bzp9k" in "kube-system" namespace to be "Ready" ...
	I0419 19:21:05.504712  375092 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-79zw6" in "kube-system" namespace to be "Ready" ...
	I0419 19:21:05.509877  375092 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-79zw6" in "kube-system" namespace has status "Ready":"True"
	I0419 19:21:05.509902  375092 pod_ready.go:81] duration metric: took 5.183686ms for pod "nvidia-device-plugin-daemonset-79zw6" in "kube-system" namespace to be "Ready" ...
	I0419 19:21:05.509923  375092 pod_ready.go:38] duration metric: took 1m46.21098783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 19:21:05.509949  375092 api_server.go:52] waiting for apiserver process to appear ...
	I0419 19:21:05.509979  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0419 19:21:05.510062  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0419 19:21:05.563619  375092 cri.go:89] found id: "2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae"
	I0419 19:21:05.563647  375092 cri.go:89] found id: ""
	I0419 19:21:05.563660  375092 logs.go:276] 1 containers: [2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae]
	I0419 19:21:05.563734  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.569416  375092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0419 19:21:05.569495  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0419 19:21:05.621696  375092 cri.go:89] found id: "924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:05.621723  375092 cri.go:89] found id: ""
	I0419 19:21:05.621732  375092 logs.go:276] 1 containers: [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f]
	I0419 19:21:05.621788  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.626127  375092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0419 19:21:05.626191  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0419 19:21:05.667371  375092 cri.go:89] found id: "0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8"
	I0419 19:21:05.667399  375092 cri.go:89] found id: ""
	I0419 19:21:05.667410  375092 logs.go:276] 1 containers: [0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8]
	I0419 19:21:05.667466  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.671883  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0419 19:21:05.671960  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0419 19:21:05.718176  375092 cri.go:89] found id: "d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da"
	I0419 19:21:05.718202  375092 cri.go:89] found id: ""
	I0419 19:21:05.718211  375092 logs.go:276] 1 containers: [d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da]
	I0419 19:21:05.718266  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.722720  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0419 19:21:05.722786  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0419 19:21:05.763488  375092 cri.go:89] found id: "39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:05.763510  375092 cri.go:89] found id: ""
	I0419 19:21:05.763520  375092 logs.go:276] 1 containers: [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8]
	I0419 19:21:05.763578  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.767910  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0419 19:21:05.767983  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0419 19:21:05.809113  375092 cri.go:89] found id: "41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec"
	I0419 19:21:05.809136  375092 cri.go:89] found id: ""
	I0419 19:21:05.809145  375092 logs.go:276] 1 containers: [41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec]
	I0419 19:21:05.809210  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:05.813608  375092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0419 19:21:05.813682  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0419 19:21:05.858420  375092 cri.go:89] found id: ""
	I0419 19:21:05.858458  375092 logs.go:276] 0 containers: []
	W0419 19:21:05.858470  375092 logs.go:278] No container was found matching "kindnet"
	I0419 19:21:05.858484  375092 logs.go:123] Gathering logs for CRI-O ...
	I0419 19:21:05.858501  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0419 19:21:06.949534  375092 logs.go:123] Gathering logs for container status ...
	I0419 19:21:06.949596  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 19:21:07.000206  375092 logs.go:123] Gathering logs for kubelet ...
	I0419 19:21:07.000251  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0419 19:21:07.057720  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.137424    1264 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.057891  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137465    1264 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.058018  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.137555    1264 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.058158  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.058413  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.058567  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.067400  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.067575  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	I0419 19:21:07.091813  375092 logs.go:123] Gathering logs for describe nodes ...
	I0419 19:21:07.091851  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 19:21:07.236204  375092 logs.go:123] Gathering logs for kube-apiserver [2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae] ...
	I0419 19:21:07.236244  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae"
	I0419 19:21:07.287090  375092 logs.go:123] Gathering logs for etcd [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f] ...
	I0419 19:21:07.287129  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:07.351536  375092 logs.go:123] Gathering logs for coredns [0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8] ...
	I0419 19:21:07.351579  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8"
	I0419 19:21:07.391648  375092 logs.go:123] Gathering logs for kube-proxy [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8] ...
	I0419 19:21:07.391683  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:07.439850  375092 logs.go:123] Gathering logs for dmesg ...
	I0419 19:21:07.439879  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 19:21:07.454411  375092 logs.go:123] Gathering logs for kube-scheduler [d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da] ...
	I0419 19:21:07.454446  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da"
	I0419 19:21:07.499974  375092 logs.go:123] Gathering logs for kube-controller-manager [41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec] ...
	I0419 19:21:07.500010  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec"
	I0419 19:21:07.576082  375092 out.go:304] Setting ErrFile to fd 2...
	I0419 19:21:07.576117  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0419 19:21:07.576203  375092 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0419 19:21:07.576213  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.576222  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.576239  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.576248  375092 out.go:239]   Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	W0419 19:21:07.576269  375092 out.go:239]   Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	I0419 19:21:07.576277  375092 out.go:304] Setting ErrFile to fd 2...
	I0419 19:21:07.576285  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:21:17.577517  375092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 19:21:17.600784  375092 api_server.go:72] duration metric: took 2m7.474200818s to wait for apiserver process to appear ...
	I0419 19:21:17.600819  375092 api_server.go:88] waiting for apiserver healthz status ...
	I0419 19:21:17.600869  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0419 19:21:17.600962  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0419 19:21:17.651577  375092 cri.go:89] found id: "2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae"
	I0419 19:21:17.651602  375092 cri.go:89] found id: ""
	I0419 19:21:17.651611  375092 logs.go:276] 1 containers: [2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae]
	I0419 19:21:17.651677  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.656288  375092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0419 19:21:17.656361  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0419 19:21:17.698523  375092 cri.go:89] found id: "924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:17.698552  375092 cri.go:89] found id: ""
	I0419 19:21:17.698564  375092 logs.go:276] 1 containers: [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f]
	I0419 19:21:17.698618  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.704296  375092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0419 19:21:17.704372  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0419 19:21:17.747921  375092 cri.go:89] found id: "0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8"
	I0419 19:21:17.747950  375092 cri.go:89] found id: ""
	I0419 19:21:17.747960  375092 logs.go:276] 1 containers: [0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8]
	I0419 19:21:17.748011  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.752386  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0419 19:21:17.752454  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0419 19:21:17.797379  375092 cri.go:89] found id: "d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da"
	I0419 19:21:17.797403  375092 cri.go:89] found id: ""
	I0419 19:21:17.797413  375092 logs.go:276] 1 containers: [d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da]
	I0419 19:21:17.797479  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.801950  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0419 19:21:17.802023  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0419 19:21:17.849027  375092 cri.go:89] found id: "39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:17.849053  375092 cri.go:89] found id: ""
	I0419 19:21:17.849066  375092 logs.go:276] 1 containers: [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8]
	I0419 19:21:17.849124  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.854376  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0419 19:21:17.854452  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0419 19:21:17.892625  375092 cri.go:89] found id: "41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec"
	I0419 19:21:17.892668  375092 cri.go:89] found id: ""
	I0419 19:21:17.892680  375092 logs.go:276] 1 containers: [41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec]
	I0419 19:21:17.892757  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:17.897064  375092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0419 19:21:17.897140  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0419 19:21:17.936054  375092 cri.go:89] found id: ""
	I0419 19:21:17.936084  375092 logs.go:276] 0 containers: []
	W0419 19:21:17.936092  375092 logs.go:278] No container was found matching "kindnet"
	I0419 19:21:17.936102  375092 logs.go:123] Gathering logs for coredns [0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8] ...
	I0419 19:21:17.936118  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8"
	I0419 19:21:17.974558  375092 logs.go:123] Gathering logs for kube-scheduler [d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da] ...
	I0419 19:21:17.974601  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da"
	I0419 19:21:18.020832  375092 logs.go:123] Gathering logs for kube-proxy [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8] ...
	I0419 19:21:18.020860  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:18.064744  375092 logs.go:123] Gathering logs for kube-controller-manager [41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec] ...
	I0419 19:21:18.064776  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec"
	I0419 19:21:18.130319  375092 logs.go:123] Gathering logs for dmesg ...
	I0419 19:21:18.130356  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 19:21:18.145751  375092 logs.go:123] Gathering logs for describe nodes ...
	I0419 19:21:18.145779  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 19:21:18.279819  375092 logs.go:123] Gathering logs for kube-apiserver [2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae] ...
	I0419 19:21:18.279850  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae"
	I0419 19:21:18.330934  375092 logs.go:123] Gathering logs for etcd [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f] ...
	I0419 19:21:18.330984  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:18.393842  375092 logs.go:123] Gathering logs for container status ...
	I0419 19:21:18.393875  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 19:21:18.459498  375092 logs.go:123] Gathering logs for kubelet ...
	I0419 19:21:18.459536  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0419 19:21:18.508095  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.137424    1264 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.508262  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137465    1264 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.508391  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.137555    1264 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.508530  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.508758  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.508895  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.517535  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	W0419 19:21:18.517681  375092 logs.go:138] Found kubelet problem: Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	I0419 19:21:18.542617  375092 logs.go:123] Gathering logs for CRI-O ...
	I0419 19:21:18.542644  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0419 19:21:19.330628  375092 out.go:304] Setting ErrFile to fd 2...
	I0419 19:21:19.330667  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0419 19:21:19.330751  375092 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0419 19:21:19.330768  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.137570    1264 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:19.330779  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: W0419 19:19:10.765677    1264 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:19.330792  375092 out.go:239]   Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:10 addons-310054 kubelet[1264]: E0419 19:19:10.765730    1264 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-310054" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-310054' and this object
	W0419 19:21:19.330799  375092 out.go:239]   Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:19 addons-310054 kubelet[1264]: W0419 19:19:19.146005    1264 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	W0419 19:21:19.330808  375092 out.go:239]   Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	  Apr 19 19:19:19 addons-310054 kubelet[1264]: E0419 19:19:19.146846    1264 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-310054" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-310054' and this object
	I0419 19:21:19.330819  375092 out.go:304] Setting ErrFile to fd 2...
	I0419 19:21:19.330828  375092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:21:29.332278  375092 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0419 19:21:29.338530  375092 api_server.go:279] https://192.168.39.199:8443/healthz returned 200:
	ok
	I0419 19:21:29.339801  375092 api_server.go:141] control plane version: v1.30.0
	I0419 19:21:29.339836  375092 api_server.go:131] duration metric: took 11.739008374s to wait for apiserver health ...
	I0419 19:21:29.339847  375092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 19:21:29.339895  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0419 19:21:29.339973  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0419 19:21:29.390122  375092 cri.go:89] found id: "2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae"
	I0419 19:21:29.390146  375092 cri.go:89] found id: ""
	I0419 19:21:29.390155  375092 logs.go:276] 1 containers: [2cbe27813fd65a512817f38d139346817ad5f2e8c19bafde23b7f5b74202fdae]
	I0419 19:21:29.390218  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.396072  375092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0419 19:21:29.396136  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0419 19:21:29.436540  375092 cri.go:89] found id: "924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:29.436569  375092 cri.go:89] found id: ""
	I0419 19:21:29.436581  375092 logs.go:276] 1 containers: [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f]
	I0419 19:21:29.436657  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.441232  375092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0419 19:21:29.441339  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0419 19:21:29.483792  375092 cri.go:89] found id: "0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8"
	I0419 19:21:29.483816  375092 cri.go:89] found id: ""
	I0419 19:21:29.483826  375092 logs.go:276] 1 containers: [0af938bcfc255f1eb5a1c69f010bf57f16dce618a591673dbd5af8250eca3bd8]
	I0419 19:21:29.483892  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.488959  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0419 19:21:29.489024  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0419 19:21:29.529549  375092 cri.go:89] found id: "d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da"
	I0419 19:21:29.529575  375092 cri.go:89] found id: ""
	I0419 19:21:29.529584  375092 logs.go:276] 1 containers: [d485a51bb39dfa2c12a30b4787a809a3ddea51753a1df6c5b01d83918e5a66da]
	I0419 19:21:29.529649  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.534209  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0419 19:21:29.534293  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0419 19:21:29.579747  375092 cri.go:89] found id: "39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:29.579777  375092 cri.go:89] found id: ""
	I0419 19:21:29.579788  375092 logs.go:276] 1 containers: [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8]
	I0419 19:21:29.579856  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.584583  375092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0419 19:21:29.584674  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0419 19:21:29.636715  375092 cri.go:89] found id: "41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec"
	I0419 19:21:29.636743  375092 cri.go:89] found id: ""
	I0419 19:21:29.636755  375092 logs.go:276] 1 containers: [41ee95d4114ec0f529a5b952ce49e6d5d6c2a1c69db163d2f25271543516f0ec]
	I0419 19:21:29.636818  375092 ssh_runner.go:195] Run: which crictl
	I0419 19:21:29.642165  375092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0419 19:21:29.642231  375092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0419 19:21:29.682359  375092 cri.go:89] found id: ""
	I0419 19:21:29.682389  375092 logs.go:276] 0 containers: []
	W0419 19:21:29.682399  375092 logs.go:278] No container was found matching "kindnet"
	I0419 19:21:29.682408  375092 logs.go:123] Gathering logs for describe nodes ...
	I0419 19:21:29.682423  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 19:21:29.808436  375092 logs.go:123] Gathering logs for etcd [924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f] ...
	I0419 19:21:29.808483  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 924004cbb87ed0091b20e1606ec645e6ddd63a8898193a74bb0177b48524ae2f"
	I0419 19:21:29.872916  375092 logs.go:123] Gathering logs for kube-proxy [39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8] ...
	I0419 19:21:29.872963  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39e17a49177e8977c882eb9566718a095d6062edec0a0825717ddcef871054a8"
	I0419 19:21:29.915093  375092 logs.go:123] Gathering logs for CRI-O ...
	I0419 19:21:29.915137  375092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-linux-amd64 start -p addons-310054 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 node stop m02 -v=7 --alsologtostderr
E0419 20:07:51.191745  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:08:32.151986  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.500914744s)

                                                
                                                
-- stdout --
	* Stopping node "ha-423356-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:07:47.647474  392842 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:07:47.647615  392842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:07:47.647625  392842 out.go:304] Setting ErrFile to fd 2...
	I0419 20:07:47.647629  392842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:07:47.648344  392842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:07:47.648808  392842 mustload.go:65] Loading cluster: ha-423356
	I0419 20:07:47.649780  392842 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:07:47.649807  392842 stop.go:39] StopHost: ha-423356-m02
	I0419 20:07:47.650338  392842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:07:47.650403  392842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:07:47.666117  392842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0419 20:07:47.666684  392842 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:07:47.667314  392842 main.go:141] libmachine: Using API Version  1
	I0419 20:07:47.667347  392842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:07:47.667732  392842 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:07:47.670351  392842 out.go:177] * Stopping node "ha-423356-m02"  ...
	I0419 20:07:47.671601  392842 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0419 20:07:47.671634  392842 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:07:47.671861  392842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0419 20:07:47.671885  392842 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:07:47.674765  392842 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:07:47.675212  392842 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:07:47.675244  392842 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:07:47.675345  392842 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:07:47.675513  392842 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:07:47.675680  392842 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:07:47.675807  392842 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:07:47.765338  392842 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0419 20:07:47.822646  392842 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0419 20:07:47.879266  392842 main.go:141] libmachine: Stopping "ha-423356-m02"...
	I0419 20:07:47.879304  392842 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:07:47.881271  392842 main.go:141] libmachine: (ha-423356-m02) Calling .Stop
	I0419 20:07:47.885207  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 0/120
	I0419 20:07:48.887204  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 1/120
	I0419 20:07:49.888796  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 2/120
	I0419 20:07:50.891191  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 3/120
	I0419 20:07:51.893495  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 4/120
	I0419 20:07:52.895488  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 5/120
	I0419 20:07:53.896978  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 6/120
	I0419 20:07:54.898292  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 7/120
	I0419 20:07:55.899895  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 8/120
	I0419 20:07:56.901583  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 9/120
	I0419 20:07:57.902892  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 10/120
	I0419 20:07:58.904143  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 11/120
	I0419 20:07:59.905407  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 12/120
	I0419 20:08:00.906983  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 13/120
	I0419 20:08:01.908476  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 14/120
	I0419 20:08:02.910662  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 15/120
	I0419 20:08:03.912115  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 16/120
	I0419 20:08:04.913664  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 17/120
	I0419 20:08:05.915368  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 18/120
	I0419 20:08:06.917041  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 19/120
	I0419 20:08:07.919202  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 20/120
	I0419 20:08:08.920979  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 21/120
	I0419 20:08:09.923063  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 22/120
	I0419 20:08:10.925109  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 23/120
	I0419 20:08:11.927341  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 24/120
	I0419 20:08:12.929256  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 25/120
	I0419 20:08:13.931160  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 26/120
	I0419 20:08:14.932698  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 27/120
	I0419 20:08:15.934098  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 28/120
	I0419 20:08:16.935554  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 29/120
	I0419 20:08:17.937667  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 30/120
	I0419 20:08:18.939639  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 31/120
	I0419 20:08:19.941011  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 32/120
	I0419 20:08:20.943147  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 33/120
	I0419 20:08:21.944663  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 34/120
	I0419 20:08:22.945969  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 35/120
	I0419 20:08:23.947543  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 36/120
	I0419 20:08:24.949112  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 37/120
	I0419 20:08:25.951548  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 38/120
	I0419 20:08:26.953235  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 39/120
	I0419 20:08:27.954727  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 40/120
	I0419 20:08:28.956273  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 41/120
	I0419 20:08:29.957539  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 42/120
	I0419 20:08:30.959311  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 43/120
	I0419 20:08:31.960695  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 44/120
	I0419 20:08:32.962916  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 45/120
	I0419 20:08:33.964193  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 46/120
	I0419 20:08:34.966330  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 47/120
	I0419 20:08:35.967726  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 48/120
	I0419 20:08:36.969066  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 49/120
	I0419 20:08:37.971352  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 50/120
	I0419 20:08:38.972827  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 51/120
	I0419 20:08:39.974234  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 52/120
	I0419 20:08:40.975453  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 53/120
	I0419 20:08:41.976798  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 54/120
	I0419 20:08:42.978878  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 55/120
	I0419 20:08:43.980408  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 56/120
	I0419 20:08:44.981848  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 57/120
	I0419 20:08:45.984180  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 58/120
	I0419 20:08:46.985935  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 59/120
	I0419 20:08:47.988174  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 60/120
	I0419 20:08:48.989778  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 61/120
	I0419 20:08:49.991547  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 62/120
	I0419 20:08:50.992779  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 63/120
	I0419 20:08:51.994364  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 64/120
	I0419 20:08:52.996055  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 65/120
	I0419 20:08:53.997423  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 66/120
	I0419 20:08:54.999782  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 67/120
	I0419 20:08:56.001395  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 68/120
	I0419 20:08:57.002866  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 69/120
	I0419 20:08:58.004723  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 70/120
	I0419 20:08:59.006346  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 71/120
	I0419 20:09:00.008188  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 72/120
	I0419 20:09:01.009528  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 73/120
	I0419 20:09:02.010851  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 74/120
	I0419 20:09:03.012850  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 75/120
	I0419 20:09:04.014592  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 76/120
	I0419 20:09:05.015915  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 77/120
	I0419 20:09:06.017346  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 78/120
	I0419 20:09:07.018697  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 79/120
	I0419 20:09:08.021009  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 80/120
	I0419 20:09:09.023319  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 81/120
	I0419 20:09:10.025408  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 82/120
	I0419 20:09:11.028058  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 83/120
	I0419 20:09:12.029343  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 84/120
	I0419 20:09:13.031300  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 85/120
	I0419 20:09:14.032536  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 86/120
	I0419 20:09:15.033780  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 87/120
	I0419 20:09:16.035284  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 88/120
	I0419 20:09:17.036585  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 89/120
	I0419 20:09:18.038846  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 90/120
	I0419 20:09:19.040308  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 91/120
	I0419 20:09:20.041605  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 92/120
	I0419 20:09:21.043532  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 93/120
	I0419 20:09:22.045115  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 94/120
	I0419 20:09:23.047160  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 95/120
	I0419 20:09:24.048877  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 96/120
	I0419 20:09:25.051128  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 97/120
	I0419 20:09:26.052450  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 98/120
	I0419 20:09:27.054019  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 99/120
	I0419 20:09:28.055871  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 100/120
	I0419 20:09:29.057383  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 101/120
	I0419 20:09:30.058737  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 102/120
	I0419 20:09:31.060239  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 103/120
	I0419 20:09:32.062005  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 104/120
	I0419 20:09:33.063663  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 105/120
	I0419 20:09:34.065880  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 106/120
	I0419 20:09:35.067317  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 107/120
	I0419 20:09:36.068686  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 108/120
	I0419 20:09:37.070339  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 109/120
	I0419 20:09:38.072445  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 110/120
	I0419 20:09:39.073725  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 111/120
	I0419 20:09:40.075511  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 112/120
	I0419 20:09:41.077869  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 113/120
	I0419 20:09:42.079338  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 114/120
	I0419 20:09:43.081447  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 115/120
	I0419 20:09:44.083588  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 116/120
	I0419 20:09:45.085567  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 117/120
	I0419 20:09:46.087352  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 118/120
	I0419 20:09:47.089096  392842 main.go:141] libmachine: (ha-423356-m02) Waiting for machine to stop 119/120
	I0419 20:09:48.089891  392842 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0419 20:09:48.090037  392842 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-423356 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
E0419 20:09:54.072828  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (19.245410807s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:09:48.149991  393284 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:09:48.150170  393284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:09:48.150180  393284 out.go:304] Setting ErrFile to fd 2...
	I0419 20:09:48.150186  393284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:09:48.150412  393284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:09:48.150614  393284 out.go:298] Setting JSON to false
	I0419 20:09:48.150649  393284 mustload.go:65] Loading cluster: ha-423356
	I0419 20:09:48.150761  393284 notify.go:220] Checking for updates...
	I0419 20:09:48.151108  393284 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:09:48.151130  393284 status.go:255] checking status of ha-423356 ...
	I0419 20:09:48.151517  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.151584  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.168540  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0419 20:09:48.169004  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.169689  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.169723  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.170040  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.170252  393284 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:09:48.172121  393284 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:09:48.172152  393284 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:09:48.172458  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.172501  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.187850  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0419 20:09:48.188232  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.188677  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.188697  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.188972  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.189160  393284 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:09:48.191867  393284 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:09:48.192410  393284 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:09:48.192439  393284 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:09:48.192585  393284 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:09:48.193004  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.193042  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.207959  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38347
	I0419 20:09:48.208327  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.208767  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.208802  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.209141  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.209357  393284 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:09:48.209550  393284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:09:48.209602  393284 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:09:48.212245  393284 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:09:48.212817  393284 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:09:48.212854  393284 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:09:48.212972  393284 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:09:48.213151  393284 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:09:48.213291  393284 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:09:48.213453  393284 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:09:48.295122  393284 ssh_runner.go:195] Run: systemctl --version
	I0419 20:09:48.302889  393284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:09:48.321391  393284 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:09:48.321422  393284 api_server.go:166] Checking apiserver status ...
	I0419 20:09:48.321463  393284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:09:48.339884  393284 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:09:48.352076  393284 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:09:48.352124  393284 ssh_runner.go:195] Run: ls
	I0419 20:09:48.357450  393284 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:09:48.363723  393284 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:09:48.363745  393284 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:09:48.363761  393284 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:09:48.363777  393284 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:09:48.364055  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.364088  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.380002  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0419 20:09:48.380400  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.380945  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.380966  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.381334  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.381530  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:09:48.383371  393284 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:09:48.383393  393284 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:09:48.383702  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.383745  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.398726  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0419 20:09:48.399153  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.399641  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.399674  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.399985  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.400167  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:09:48.402659  393284 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:09:48.403030  393284 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:09:48.403053  393284 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:09:48.403226  393284 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:09:48.403515  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:09:48.403551  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:09:48.418636  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0419 20:09:48.419066  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:09:48.419517  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:09:48.419546  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:09:48.419849  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:09:48.420021  393284 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:09:48.420184  393284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:09:48.420205  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:09:48.422637  393284 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:09:48.422983  393284 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:09:48.423007  393284 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:09:48.423113  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:09:48.423292  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:09:48.423443  393284 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:09:48.423602  393284 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:06.948843  393284 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:06.948962  393284 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:06.948988  393284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:06.949002  393284 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:06.949026  393284 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:06.949036  393284 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:06.949638  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:06.949701  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:06.965177  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0419 20:10:06.965624  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:06.966081  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:06.966103  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:06.966422  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:06.966601  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:06.968366  393284 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:06.968384  393284 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:06.968725  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:06.968767  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:06.983810  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0419 20:10:06.984336  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:06.984840  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:06.984867  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:06.985183  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:06.985377  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:06.988032  393284 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:06.988557  393284 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:06.988574  393284 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:06.988759  393284 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:06.989067  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:06.989107  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:07.005570  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0419 20:10:07.006026  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:07.006491  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:07.006508  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:07.006821  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:07.007038  393284 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:07.007213  393284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:07.007236  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:07.010269  393284 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:07.010788  393284 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:07.010830  393284 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:07.010946  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:07.011158  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:07.011325  393284 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:07.011493  393284 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:07.104331  393284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:07.122812  393284 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:07.122845  393284 api_server.go:166] Checking apiserver status ...
	I0419 20:10:07.122879  393284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:07.140650  393284 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:07.149948  393284 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:07.150005  393284 ssh_runner.go:195] Run: ls
	I0419 20:10:07.155082  393284 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:07.161588  393284 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:07.161611  393284 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:07.161620  393284 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:07.161636  393284 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:07.161986  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:07.162024  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:07.177494  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0419 20:10:07.178013  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:07.178540  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:07.178560  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:07.178954  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:07.179143  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:07.180673  393284 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:07.180691  393284 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:07.180962  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:07.181001  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:07.195862  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0419 20:10:07.196311  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:07.196815  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:07.196836  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:07.197194  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:07.197393  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:07.200376  393284 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:07.200827  393284 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:07.200857  393284 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:07.201013  393284 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:07.201412  393284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:07.201454  393284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:07.217344  393284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0419 20:10:07.217831  393284 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:07.218371  393284 main.go:141] libmachine: Using API Version  1
	I0419 20:10:07.218391  393284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:07.218726  393284 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:07.218963  393284 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:07.219179  393284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:07.219204  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:07.222451  393284 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:07.222942  393284 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:07.222976  393284 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:07.223110  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:07.223308  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:07.223465  393284 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:07.223648  393284 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:07.314053  393284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:07.334914  393284 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-423356 -n ha-423356
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-423356 logs -n 25: (1.520114269s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m03_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m04 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp testdata/cp-test.txt                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m04_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03:/home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m03 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-423356 node stop m02 -v=7                                                     | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:03:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:03:02.845033  388805 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:03:02.845273  388805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:03:02.845282  388805 out.go:304] Setting ErrFile to fd 2...
	I0419 20:03:02.845286  388805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:03:02.845488  388805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:03:02.846074  388805 out.go:298] Setting JSON to false
	I0419 20:03:02.847027  388805 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6329,"bootTime":1713550654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:03:02.847103  388805 start.go:139] virtualization: kvm guest
	I0419 20:03:02.849294  388805 out.go:177] * [ha-423356] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:03:02.850788  388805 notify.go:220] Checking for updates...
	I0419 20:03:02.850799  388805 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:03:02.852415  388805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:03:02.854180  388805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:02.855527  388805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:02.856730  388805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:03:02.858102  388805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:03:02.859530  388805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:03:02.895033  388805 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 20:03:02.896430  388805 start.go:297] selected driver: kvm2
	I0419 20:03:02.896441  388805 start.go:901] validating driver "kvm2" against <nil>
	I0419 20:03:02.896454  388805 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:03:02.897175  388805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:03:02.897263  388805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:03:02.912832  388805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:03:02.912885  388805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 20:03:02.913116  388805 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:03:02.913190  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:02.913202  388805 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 20:03:02.913207  388805 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 20:03:02.913266  388805 start.go:340] cluster config:
	{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:03:02.913370  388805 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:03:02.915373  388805 out.go:177] * Starting "ha-423356" primary control-plane node in "ha-423356" cluster
	I0419 20:03:02.916990  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:02.917035  388805 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:03:02.917046  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:03:02.917164  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:03:02.917178  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:03:02.917469  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:02.917491  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json: {Name:mk412b5f97f86b0ffa73cd379f7e787167939ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:02.917655  388805 start.go:360] acquireMachinesLock for ha-423356: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:03:02.917692  388805 start.go:364] duration metric: took 18.288µs to acquireMachinesLock for "ha-423356"
	I0419 20:03:02.917717  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:02.917816  388805 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 20:03:02.919511  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:03:02.919654  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:02.919707  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:02.934351  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0419 20:03:02.934822  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:02.935463  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:02.935488  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:02.935946  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:02.936157  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:02.936332  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:02.936480  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:03:02.936505  388805 client.go:168] LocalClient.Create starting
	I0419 20:03:02.936531  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:03:02.936569  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:02.936587  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:02.936673  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:03:02.936699  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:02.936714  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:02.936737  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:03:02.936745  388805 main.go:141] libmachine: (ha-423356) Calling .PreCreateCheck
	I0419 20:03:02.937106  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:02.937505  388805 main.go:141] libmachine: Creating machine...
	I0419 20:03:02.937518  388805 main.go:141] libmachine: (ha-423356) Calling .Create
	I0419 20:03:02.937653  388805 main.go:141] libmachine: (ha-423356) Creating KVM machine...
	I0419 20:03:02.938938  388805 main.go:141] libmachine: (ha-423356) DBG | found existing default KVM network
	I0419 20:03:02.939688  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:02.939546  388829 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0419 20:03:02.939722  388805 main.go:141] libmachine: (ha-423356) DBG | created network xml: 
	I0419 20:03:02.939742  388805 main.go:141] libmachine: (ha-423356) DBG | <network>
	I0419 20:03:02.939824  388805 main.go:141] libmachine: (ha-423356) DBG |   <name>mk-ha-423356</name>
	I0419 20:03:02.939849  388805 main.go:141] libmachine: (ha-423356) DBG |   <dns enable='no'/>
	I0419 20:03:02.939860  388805 main.go:141] libmachine: (ha-423356) DBG |   
	I0419 20:03:02.939874  388805 main.go:141] libmachine: (ha-423356) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0419 20:03:02.939884  388805 main.go:141] libmachine: (ha-423356) DBG |     <dhcp>
	I0419 20:03:02.939894  388805 main.go:141] libmachine: (ha-423356) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0419 20:03:02.939913  388805 main.go:141] libmachine: (ha-423356) DBG |     </dhcp>
	I0419 20:03:02.939932  388805 main.go:141] libmachine: (ha-423356) DBG |   </ip>
	I0419 20:03:02.939945  388805 main.go:141] libmachine: (ha-423356) DBG |   
	I0419 20:03:02.939960  388805 main.go:141] libmachine: (ha-423356) DBG | </network>
	I0419 20:03:02.939994  388805 main.go:141] libmachine: (ha-423356) DBG | 
	I0419 20:03:02.945195  388805 main.go:141] libmachine: (ha-423356) DBG | trying to create private KVM network mk-ha-423356 192.168.39.0/24...
	I0419 20:03:03.017485  388805 main.go:141] libmachine: (ha-423356) DBG | private KVM network mk-ha-423356 192.168.39.0/24 created
	I0419 20:03:03.017520  388805 main.go:141] libmachine: (ha-423356) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 ...
	I0419 20:03:03.017531  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.017389  388829 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:03.017545  388805 main.go:141] libmachine: (ha-423356) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:03:03.017771  388805 main.go:141] libmachine: (ha-423356) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:03:03.264638  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.264500  388829 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa...
	I0419 20:03:03.381449  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.381305  388829 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/ha-423356.rawdisk...
	I0419 20:03:03.381484  388805 main.go:141] libmachine: (ha-423356) DBG | Writing magic tar header
	I0419 20:03:03.381503  388805 main.go:141] libmachine: (ha-423356) DBG | Writing SSH key tar header
	I0419 20:03:03.381515  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.381422  388829 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 ...
	I0419 20:03:03.381529  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356
	I0419 20:03:03.381595  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 (perms=drwx------)
	I0419 20:03:03.381620  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:03:03.381636  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:03:03.381663  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:03:03.381670  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:03:03.381679  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:03.381689  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:03:03.381702  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:03:03.381711  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:03:03.381726  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:03:03.381732  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home
	I0419 20:03:03.381740  388805 main.go:141] libmachine: (ha-423356) DBG | Skipping /home - not owner
	I0419 20:03:03.381748  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:03:03.381755  388805 main.go:141] libmachine: (ha-423356) Creating domain...
	I0419 20:03:03.382893  388805 main.go:141] libmachine: (ha-423356) define libvirt domain using xml: 
	I0419 20:03:03.382916  388805 main.go:141] libmachine: (ha-423356) <domain type='kvm'>
	I0419 20:03:03.382923  388805 main.go:141] libmachine: (ha-423356)   <name>ha-423356</name>
	I0419 20:03:03.382929  388805 main.go:141] libmachine: (ha-423356)   <memory unit='MiB'>2200</memory>
	I0419 20:03:03.382934  388805 main.go:141] libmachine: (ha-423356)   <vcpu>2</vcpu>
	I0419 20:03:03.382938  388805 main.go:141] libmachine: (ha-423356)   <features>
	I0419 20:03:03.382943  388805 main.go:141] libmachine: (ha-423356)     <acpi/>
	I0419 20:03:03.382950  388805 main.go:141] libmachine: (ha-423356)     <apic/>
	I0419 20:03:03.382955  388805 main.go:141] libmachine: (ha-423356)     <pae/>
	I0419 20:03:03.382967  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.382975  388805 main.go:141] libmachine: (ha-423356)   </features>
	I0419 20:03:03.382980  388805 main.go:141] libmachine: (ha-423356)   <cpu mode='host-passthrough'>
	I0419 20:03:03.382987  388805 main.go:141] libmachine: (ha-423356)   
	I0419 20:03:03.382992  388805 main.go:141] libmachine: (ha-423356)   </cpu>
	I0419 20:03:03.382997  388805 main.go:141] libmachine: (ha-423356)   <os>
	I0419 20:03:03.383002  388805 main.go:141] libmachine: (ha-423356)     <type>hvm</type>
	I0419 20:03:03.383010  388805 main.go:141] libmachine: (ha-423356)     <boot dev='cdrom'/>
	I0419 20:03:03.383016  388805 main.go:141] libmachine: (ha-423356)     <boot dev='hd'/>
	I0419 20:03:03.383090  388805 main.go:141] libmachine: (ha-423356)     <bootmenu enable='no'/>
	I0419 20:03:03.383121  388805 main.go:141] libmachine: (ha-423356)   </os>
	I0419 20:03:03.383132  388805 main.go:141] libmachine: (ha-423356)   <devices>
	I0419 20:03:03.383143  388805 main.go:141] libmachine: (ha-423356)     <disk type='file' device='cdrom'>
	I0419 20:03:03.383160  388805 main.go:141] libmachine: (ha-423356)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/boot2docker.iso'/>
	I0419 20:03:03.383173  388805 main.go:141] libmachine: (ha-423356)       <target dev='hdc' bus='scsi'/>
	I0419 20:03:03.383184  388805 main.go:141] libmachine: (ha-423356)       <readonly/>
	I0419 20:03:03.383192  388805 main.go:141] libmachine: (ha-423356)     </disk>
	I0419 20:03:03.383210  388805 main.go:141] libmachine: (ha-423356)     <disk type='file' device='disk'>
	I0419 20:03:03.383231  388805 main.go:141] libmachine: (ha-423356)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:03:03.383248  388805 main.go:141] libmachine: (ha-423356)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/ha-423356.rawdisk'/>
	I0419 20:03:03.383259  388805 main.go:141] libmachine: (ha-423356)       <target dev='hda' bus='virtio'/>
	I0419 20:03:03.383270  388805 main.go:141] libmachine: (ha-423356)     </disk>
	I0419 20:03:03.383278  388805 main.go:141] libmachine: (ha-423356)     <interface type='network'>
	I0419 20:03:03.383285  388805 main.go:141] libmachine: (ha-423356)       <source network='mk-ha-423356'/>
	I0419 20:03:03.383296  388805 main.go:141] libmachine: (ha-423356)       <model type='virtio'/>
	I0419 20:03:03.383315  388805 main.go:141] libmachine: (ha-423356)     </interface>
	I0419 20:03:03.383333  388805 main.go:141] libmachine: (ha-423356)     <interface type='network'>
	I0419 20:03:03.383345  388805 main.go:141] libmachine: (ha-423356)       <source network='default'/>
	I0419 20:03:03.383350  388805 main.go:141] libmachine: (ha-423356)       <model type='virtio'/>
	I0419 20:03:03.383358  388805 main.go:141] libmachine: (ha-423356)     </interface>
	I0419 20:03:03.383364  388805 main.go:141] libmachine: (ha-423356)     <serial type='pty'>
	I0419 20:03:03.383371  388805 main.go:141] libmachine: (ha-423356)       <target port='0'/>
	I0419 20:03:03.383376  388805 main.go:141] libmachine: (ha-423356)     </serial>
	I0419 20:03:03.383383  388805 main.go:141] libmachine: (ha-423356)     <console type='pty'>
	I0419 20:03:03.383389  388805 main.go:141] libmachine: (ha-423356)       <target type='serial' port='0'/>
	I0419 20:03:03.383394  388805 main.go:141] libmachine: (ha-423356)     </console>
	I0419 20:03:03.383399  388805 main.go:141] libmachine: (ha-423356)     <rng model='virtio'>
	I0419 20:03:03.383408  388805 main.go:141] libmachine: (ha-423356)       <backend model='random'>/dev/random</backend>
	I0419 20:03:03.383415  388805 main.go:141] libmachine: (ha-423356)     </rng>
	I0419 20:03:03.383435  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.383451  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.383482  388805 main.go:141] libmachine: (ha-423356)   </devices>
	I0419 20:03:03.383513  388805 main.go:141] libmachine: (ha-423356) </domain>
	I0419 20:03:03.383528  388805 main.go:141] libmachine: (ha-423356) 
	I0419 20:03:03.387835  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:f6:54:bb in network default
	I0419 20:03:03.388422  388805 main.go:141] libmachine: (ha-423356) Ensuring networks are active...
	I0419 20:03:03.388449  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:03.389092  388805 main.go:141] libmachine: (ha-423356) Ensuring network default is active
	I0419 20:03:03.389474  388805 main.go:141] libmachine: (ha-423356) Ensuring network mk-ha-423356 is active
	I0419 20:03:03.390134  388805 main.go:141] libmachine: (ha-423356) Getting domain xml...
	I0419 20:03:03.390802  388805 main.go:141] libmachine: (ha-423356) Creating domain...
	I0419 20:03:04.577031  388805 main.go:141] libmachine: (ha-423356) Waiting to get IP...
	I0419 20:03:04.577798  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:04.578185  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:04.578209  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:04.578156  388829 retry.go:31] will retry after 210.348795ms: waiting for machine to come up
	I0419 20:03:04.790567  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:04.790982  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:04.791004  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:04.790929  388829 retry.go:31] will retry after 255.069257ms: waiting for machine to come up
	I0419 20:03:05.047393  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.047985  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.048013  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.047920  388829 retry.go:31] will retry after 326.769699ms: waiting for machine to come up
	I0419 20:03:05.376549  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.377013  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.377065  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.376974  388829 retry.go:31] will retry after 598.145851ms: waiting for machine to come up
	I0419 20:03:05.978098  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.978525  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.978554  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.978473  388829 retry.go:31] will retry after 554.446944ms: waiting for machine to come up
	I0419 20:03:06.534185  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:06.534587  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:06.534623  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:06.534531  388829 retry.go:31] will retry after 799.56022ms: waiting for machine to come up
	I0419 20:03:07.335546  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:07.336009  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:07.336047  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:07.335960  388829 retry.go:31] will retry after 879.93969ms: waiting for machine to come up
	I0419 20:03:08.217737  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:08.218181  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:08.218213  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:08.218117  388829 retry.go:31] will retry after 957.891913ms: waiting for machine to come up
	I0419 20:03:09.177275  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:09.177702  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:09.177730  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:09.177617  388829 retry.go:31] will retry after 1.611056854s: waiting for machine to come up
	I0419 20:03:10.791345  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:10.791761  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:10.791787  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:10.791715  388829 retry.go:31] will retry after 1.559858168s: waiting for machine to come up
	I0419 20:03:12.353627  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:12.354099  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:12.354127  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:12.354051  388829 retry.go:31] will retry after 2.452370558s: waiting for machine to come up
	I0419 20:03:14.808552  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:14.808997  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:14.809032  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:14.808931  388829 retry.go:31] will retry after 2.373368989s: waiting for machine to come up
	I0419 20:03:17.185465  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:17.185857  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:17.185879  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:17.185802  388829 retry.go:31] will retry after 2.994584556s: waiting for machine to come up
	I0419 20:03:20.181568  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:20.182034  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:20.182060  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:20.181998  388829 retry.go:31] will retry after 5.268532534s: waiting for machine to come up
	I0419 20:03:25.453727  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.454172  388805 main.go:141] libmachine: (ha-423356) Found IP for machine: 192.168.39.7
	I0419 20:03:25.454188  388805 main.go:141] libmachine: (ha-423356) Reserving static IP address...
	I0419 20:03:25.454241  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has current primary IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.454630  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find host DHCP lease matching {name: "ha-423356", mac: "52:54:00:aa:25:62", ip: "192.168.39.7"} in network mk-ha-423356
	I0419 20:03:25.527744  388805 main.go:141] libmachine: (ha-423356) DBG | Getting to WaitForSSH function...
	I0419 20:03:25.527781  388805 main.go:141] libmachine: (ha-423356) Reserved static IP address: 192.168.39.7
	I0419 20:03:25.527795  388805 main.go:141] libmachine: (ha-423356) Waiting for SSH to be available...
	I0419 20:03:25.530520  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.530951  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.530973  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.531128  388805 main.go:141] libmachine: (ha-423356) DBG | Using SSH client type: external
	I0419 20:03:25.531152  388805 main.go:141] libmachine: (ha-423356) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa (-rw-------)
	I0419 20:03:25.531219  388805 main.go:141] libmachine: (ha-423356) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:03:25.531240  388805 main.go:141] libmachine: (ha-423356) DBG | About to run SSH command:
	I0419 20:03:25.531252  388805 main.go:141] libmachine: (ha-423356) DBG | exit 0
	I0419 20:03:25.656437  388805 main.go:141] libmachine: (ha-423356) DBG | SSH cmd err, output: <nil>: 
	I0419 20:03:25.656782  388805 main.go:141] libmachine: (ha-423356) KVM machine creation complete!
	I0419 20:03:25.657149  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:25.657693  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:25.657925  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:25.658087  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:03:25.658103  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:25.659437  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:03:25.659451  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:03:25.659457  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:03:25.659463  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.661516  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.661851  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.661884  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.662045  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.662248  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.662418  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.662549  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.662715  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.662998  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.663013  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:03:25.764216  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:03:25.764240  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:03:25.764248  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.766861  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.767236  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.767266  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.767407  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.767654  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.767798  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.767958  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.768108  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.768299  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.768315  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:03:25.869494  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:03:25.869590  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:03:25.869619  388805 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:03:25.869635  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:25.869913  388805 buildroot.go:166] provisioning hostname "ha-423356"
	I0419 20:03:25.869943  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:25.870181  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.872906  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.873302  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.873347  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.873609  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.873801  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.873940  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.874129  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.874374  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.874580  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.874594  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356 && echo "ha-423356" | sudo tee /etc/hostname
	I0419 20:03:25.988738  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:03:25.988774  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.991681  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.992038  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.992076  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.992284  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.992502  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.992677  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.992810  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.992969  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.993214  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.993243  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:03:26.102598  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:03:26.102630  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:03:26.102692  388805 buildroot.go:174] setting up certificates
	I0419 20:03:26.102708  388805 provision.go:84] configureAuth start
	I0419 20:03:26.102720  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:26.103049  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.105657  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.105970  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.105996  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.106174  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.108069  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.108385  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.108411  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.108679  388805 provision.go:143] copyHostCerts
	I0419 20:03:26.108713  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:03:26.108747  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:03:26.108755  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:03:26.108827  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:03:26.108902  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:03:26.108920  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:03:26.108925  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:03:26.108947  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:03:26.108998  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:03:26.109015  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:03:26.109021  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:03:26.109040  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:03:26.109091  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356 san=[127.0.0.1 192.168.39.7 ha-423356 localhost minikube]
	I0419 20:03:26.243241  388805 provision.go:177] copyRemoteCerts
	I0419 20:03:26.243311  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:03:26.243343  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.246005  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.246368  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.246399  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.246581  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.246759  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.246897  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.247067  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.329364  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:03:26.329433  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:03:26.356496  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:03:26.356592  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0419 20:03:26.383149  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:03:26.383227  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:03:26.409669  388805 provision.go:87] duration metric: took 306.947778ms to configureAuth
	I0419 20:03:26.409703  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:03:26.409899  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:26.409990  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.412507  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.412886  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.412916  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.413071  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.413258  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.413505  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.413685  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.413880  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:26.414040  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:26.414056  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:03:26.667960  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:03:26.668002  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:03:26.668014  388805 main.go:141] libmachine: (ha-423356) Calling .GetURL
	I0419 20:03:26.669354  388805 main.go:141] libmachine: (ha-423356) DBG | Using libvirt version 6000000
	I0419 20:03:26.671168  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.671463  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.671494  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.671606  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:03:26.671619  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:03:26.671637  388805 client.go:171] duration metric: took 23.735114952s to LocalClient.Create
	I0419 20:03:26.671669  388805 start.go:167] duration metric: took 23.735189159s to libmachine.API.Create "ha-423356"
	I0419 20:03:26.671683  388805 start.go:293] postStartSetup for "ha-423356" (driver="kvm2")
	I0419 20:03:26.671697  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:03:26.671722  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.671982  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:03:26.672004  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.673889  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.674176  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.674199  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.674325  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.674507  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.674654  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.674801  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.755868  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:03:26.760363  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:03:26.760391  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:03:26.760463  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:03:26.760584  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:03:26.760597  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:03:26.760760  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:03:26.770955  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:03:26.795442  388805 start.go:296] duration metric: took 123.744054ms for postStartSetup
	I0419 20:03:26.795493  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:26.796118  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.798439  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.798783  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.798813  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.799060  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:26.799273  388805 start.go:128] duration metric: took 23.881442783s to createHost
	I0419 20:03:26.799304  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.801223  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.801563  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.801604  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.801701  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.801920  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.802096  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.802206  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.802326  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:26.802483  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:26.802498  388805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 20:03:26.901494  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557006.870713224
	
	I0419 20:03:26.901520  388805 fix.go:216] guest clock: 1713557006.870713224
	I0419 20:03:26.901528  388805 fix.go:229] Guest: 2024-04-19 20:03:26.870713224 +0000 UTC Remote: 2024-04-19 20:03:26.799288765 +0000 UTC m=+24.008813931 (delta=71.424459ms)
	I0419 20:03:26.901548  388805 fix.go:200] guest clock delta is within tolerance: 71.424459ms
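The fix.go lines above compare the guest clock (read with `date +%s.%N`) against the host clock and accept the 71.4ms delta. A minimal sketch of that comparison, using the two timestamps from the log; the one-second tolerance here is an assumption for illustration, not minikube's constant:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, as seen in the log above.
		guestRaw := "1713557006.870713224"

		parts := strings.SplitN(guestRaw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec).UTC()

		// Host-side timestamp taken around the same moment (value from the log).
		host := time.Date(2024, 4, 19, 20, 3, 26, 799288765, time.UTC)

		delta := guest.Sub(host)
		fmt.Printf("guest clock delta: %v\n", delta)

		// Hypothetical tolerance for illustration; only large skew would matter.
		const tolerance = 1 * time.Second
		if math.Abs(float64(delta)) > float64(tolerance) {
			fmt.Println("guest clock delta outside tolerance, would resync the guest clock")
		} else {
			fmt.Println("guest clock delta is within tolerance")
		}
	}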
	I0419 20:03:26.901553  388805 start.go:83] releasing machines lock for "ha-423356", held for 23.983850205s
	I0419 20:03:26.901571  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.901828  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.904520  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.904871  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.904901  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.905039  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905518  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905706  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905778  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:03:26.905836  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.905960  388805 ssh_runner.go:195] Run: cat /version.json
	I0419 20:03:26.905982  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.908813  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.908839  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909187  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.909216  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909247  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.909263  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909340  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.909512  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.909591  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.909673  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.909737  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.909796  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.909839  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.909968  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:27.016706  388805 ssh_runner.go:195] Run: systemctl --version
	I0419 20:03:27.023021  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:03:27.191643  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:03:27.198203  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:03:27.198270  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:03:27.215795  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:03:27.215821  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:03:27.215889  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:03:27.233540  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:03:27.247728  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:03:27.247781  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:03:27.261951  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:03:27.277027  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:03:27.398569  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:03:27.546683  388805 docker.go:233] disabling docker service ...
	I0419 20:03:27.546766  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:03:27.562620  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:03:27.576030  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:03:27.717594  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:03:27.854194  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:03:27.868446  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:03:27.887617  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:03:27.887707  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.898351  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:03:27.898419  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.908980  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.919914  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.930995  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:03:27.942383  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.953171  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.970421  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
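The sed commands above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A hedged sketch of the same line-rewrite idea in Go (not how minikube does it; as the log shows, it shells out to sed on the node):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"

		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}

		// Equivalent in spirit to the sed expressions in the log:
		//   s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
		//   s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

		out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}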
	I0419 20:03:27.981291  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:03:27.991035  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:03:27.991102  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:03:28.004245  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:03:28.014484  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:03:28.140460  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:03:28.279674  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:03:28.279758  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:03:28.284871  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:03:28.284929  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:03:28.288932  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:03:28.328993  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:03:28.329087  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:03:28.363050  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:03:28.399306  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:03:28.400656  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:28.403203  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:28.403527  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:28.403556  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:28.403768  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:03:28.408153  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
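The /etc/hosts update above filters out any stale host.minikube.internal line and appends a fresh one. A small Go sketch of the same idempotent-update pattern, with the path, IP and hostname taken from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for the given hostname and appends
	// a fresh "ip<TAB>hostname" line, the same shape as the shell pipeline above.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
			fmt.Sprintf("\n%s\t%s\n", ip, hostname)
		return os.WriteFile(path, []byte(out), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}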
	I0419 20:03:28.422190  388805 kubeadm.go:877] updating cluster {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:03:28.422298  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:28.422341  388805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:03:28.456075  388805 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 20:03:28.456140  388805 ssh_runner.go:195] Run: which lz4
	I0419 20:03:28.460179  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0419 20:03:28.460272  388805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 20:03:28.464624  388805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 20:03:28.464667  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0419 20:03:29.977927  388805 crio.go:462] duration metric: took 1.517679112s to copy over tarball
	I0419 20:03:29.978041  388805 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 20:03:32.178372  388805 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200298609s)
	I0419 20:03:32.178401  388805 crio.go:469] duration metric: took 2.200430945s to extract the tarball
	I0419 20:03:32.178411  388805 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 20:03:32.215944  388805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:03:32.264481  388805 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:03:32.264509  388805 cache_images.go:84] Images are preloaded, skipping loading
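The preload step above copies preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 to the node and unpacks it with `tar -I lz4 -C /var -xf`. A simplified Go sketch of that stream-decompress-and-untar shape, assuming the third-party github.com/pierrec/lz4/v4 package; it handles only regular files and directories and skips the xattr handling the real tar command preserves:

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"
		"path/filepath"

		"github.com/pierrec/lz4/v4"
	)

	// extract streams an lz4-compressed tarball into dst, creating directories
	// and regular files with the modes recorded in the archive.
	func extract(src, dst string) error {
		f, err := os.Open(src)
		if err != nil {
			return err
		}
		defer f.Close()

		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return err
			}
			target := filepath.Join(dst, hdr.Name)
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
					return err
				}
			case tar.TypeReg:
				if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
					return err
				}
				out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
				if err != nil {
					return err
				}
				if _, err := io.Copy(out, tr); err != nil {
					out.Close()
					return err
				}
				out.Close()
			}
		}
	}

	func main() {
		if err := extract("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}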
	I0419 20:03:32.264517  388805 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.30.0 crio true true} ...
	I0419 20:03:32.264624  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:03:32.264709  388805 ssh_runner.go:195] Run: crio config
	I0419 20:03:32.310485  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:32.310508  388805 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 20:03:32.310520  388805 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:03:32.310548  388805 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423356 NodeName:ha-423356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:03:32.310714  388805 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
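The kubeadm.yaml rendered above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file. A small sketch of pulling a few KubeletConfiguration fields back out with gopkg.in/yaml.v3, assuming the documents are separated by standalone --- lines as shown; the struct covers only a subset of fields, chosen for illustration:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// Only the handful of KubeletConfiguration fields shown in the log.
	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		StaticPodPath            string `yaml:"staticPodPath"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
	}

	func main() {
		raw, err := os.ReadFile("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml
		if err != nil {
			log.Fatal(err)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var kc kubeletConfig
			if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
				log.Fatal(err)
			}
			if kc.Kind != "KubeletConfiguration" {
				continue
			}
			fmt.Printf("cgroupDriver=%s endpoint=%s staticPodPath=%s failSwapOn=%v\n",
				kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.StaticPodPath, kc.FailSwapOn)
		}
	}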
	I0419 20:03:32.310737  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:03:32.310776  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:03:32.330065  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:03:32.330225  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
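The kube-vip static-pod manifest above is rendered with the cluster VIP 192.168.39.254 and API server port 8443 filled in. A minimal text/template sketch of that style of generation; the template body here is trimmed to a few fields and is not minikube's actual kube-vip template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const podTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(podTmpl))
		data := struct {
			VIP  string
			Port int
		}{VIP: "192.168.39.254", Port: 8443}

		// The log shows the rendered manifest being copied to
		// /etc/kubernetes/manifests/kube-vip.yaml on the node.
		if err := t.Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}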
	I0419 20:03:32.330289  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:03:32.341007  388805 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:03:32.341086  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 20:03:32.351073  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0419 20:03:32.368450  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:03:32.385658  388805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0419 20:03:32.402827  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0419 20:03:32.420334  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:03:32.424256  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:03:32.437379  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:03:32.571957  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:03:32.590774  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.7
	I0419 20:03:32.590799  388805 certs.go:194] generating shared ca certs ...
	I0419 20:03:32.590816  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.590980  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:03:32.591038  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:03:32.591054  388805 certs.go:256] generating profile certs ...
	I0419 20:03:32.591113  388805 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:03:32.591128  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt with IP's: []
	I0419 20:03:32.723601  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt ...
	I0419 20:03:32.723629  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt: {Name:mk1bd2547d29de1d78dafadecadc8f6efc913cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.723795  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key ...
	I0419 20:03:32.723806  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key: {Name:mk1478e712eb8f185eb76d47c3f87d2afed17914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.723899  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b
	I0419 20:03:32.723920  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]
	I0419 20:03:32.847857  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b ...
	I0419 20:03:32.847890  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b: {Name:mk6196b7f125d4557863fc7da4b5e249cdadf91a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.848067  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b ...
	I0419 20:03:32.848086  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b: {Name:mk123597a78a2e3d0fb518f916030db99d125560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.848178  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:03:32.848308  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:03:32.848394  388805 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:03:32.848418  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt with IP's: []
	I0419 20:03:33.165191  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt ...
	I0419 20:03:33.165225  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt: {Name:mk42c5a4581b58f03d988ba5fb49cc746e3616fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:33.165379  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key ...
	I0419 20:03:33.165390  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key: {Name:mk5b2f3debab93ffe0190a67aac9b6bb8ea9000e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
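The profile certs above are issued from the shared minikubeCA with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]. A compact crypto/x509 sketch of issuing a leaf cert with those SANs; the CA is generated inline here for brevity, whereas the real run reuses ca.crt and ca.key from the .minikube directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; the real run loads .minikube/ca.{crt,key}.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert carrying the IP SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.7"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}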
	I0419 20:03:33.165453  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:03:33.165470  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:03:33.165489  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:03:33.165502  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:03:33.165515  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:03:33.165532  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:03:33.165544  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:03:33.165555  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:03:33.165601  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:03:33.165668  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:03:33.165682  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:03:33.165702  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:03:33.165723  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:03:33.165740  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:03:33.165787  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:03:33.165819  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.165834  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.165846  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.166431  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:03:33.191562  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:03:33.216011  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:03:33.241838  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:03:33.268607  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 20:03:33.295119  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 20:03:33.321459  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:03:33.348313  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:03:33.385201  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:03:33.413507  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:03:33.441390  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:03:33.465663  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:03:33.482967  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:03:33.488869  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:03:33.500539  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.505625  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.505697  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.511579  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:03:33.523329  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:03:33.535160  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.539864  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.539930  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.545617  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:03:33.557185  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:03:33.568618  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.573237  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.573285  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.579041  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:03:33.590796  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:03:33.595490  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:03:33.595555  388805 kubeadm.go:391] StartCluster: {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:03:33.595644  388805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:03:33.595713  388805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:03:33.633360  388805 cri.go:89] found id: ""
	I0419 20:03:33.633455  388805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 20:03:33.644229  388805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 20:03:33.654631  388805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 20:03:33.664977  388805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 20:03:33.665000  388805 kubeadm.go:156] found existing configuration files:
	
	I0419 20:03:33.665066  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 20:03:33.674806  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 20:03:33.674872  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 20:03:33.684963  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 20:03:33.694505  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 20:03:33.694568  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 20:03:33.704677  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 20:03:33.714350  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 20:03:33.714418  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 20:03:33.724611  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 20:03:33.734639  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 20:03:33.734729  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 20:03:33.745192  388805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 20:03:33.847901  388805 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 20:03:33.848005  388805 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 20:03:33.973751  388805 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 20:03:33.973883  388805 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 20:03:33.974017  388805 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 20:03:34.222167  388805 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 20:03:34.340982  388805 out.go:204]   - Generating certificates and keys ...
	I0419 20:03:34.341096  388805 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 20:03:34.341176  388805 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 20:03:34.400334  388805 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 20:03:34.535679  388805 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 20:03:34.664392  388805 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 20:03:34.789170  388805 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 20:03:35.056390  388805 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 20:03:35.056568  388805 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-423356 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0419 20:03:35.117701  388805 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 20:03:35.117850  388805 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-423356 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0419 20:03:35.286285  388805 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 20:03:35.440213  388805 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 20:03:35.647864  388805 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 20:03:35.648137  388805 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 20:03:35.822817  388805 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 20:03:36.067883  388805 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 20:03:36.377307  388805 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 20:03:36.527837  388805 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 20:03:36.808888  388805 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 20:03:36.809489  388805 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 20:03:36.812492  388805 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 20:03:36.814740  388805 out.go:204]   - Booting up control plane ...
	I0419 20:03:36.814931  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 20:03:36.815078  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 20:03:36.815206  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 20:03:36.836621  388805 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 20:03:36.837036  388805 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 20:03:36.837097  388805 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 20:03:36.976652  388805 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 20:03:36.976785  388805 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 20:03:37.978129  388805 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002406944s
	I0419 20:03:37.978214  388805 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 20:03:43.923411  388805 kubeadm.go:309] [api-check] The API server is healthy after 5.949629263s
	I0419 20:03:43.936942  388805 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 20:03:43.953542  388805 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 20:03:43.981212  388805 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 20:03:43.981467  388805 kubeadm.go:309] [mark-control-plane] Marking the node ha-423356 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 20:03:43.994804  388805 kubeadm.go:309] [bootstrap-token] Using token: awd3b6.qij36bhfjtodtmhg
	I0419 20:03:43.996377  388805 out.go:204]   - Configuring RBAC rules ...
	I0419 20:03:43.996503  388805 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 20:03:44.001330  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 20:03:44.009538  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 20:03:44.013911  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 20:03:44.020585  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 20:03:44.027243  388805 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 20:03:44.330976  388805 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 20:03:44.765900  388805 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 20:03:45.331489  388805 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 20:03:45.332459  388805 kubeadm.go:309] 
	I0419 20:03:45.332576  388805 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 20:03:45.332597  388805 kubeadm.go:309] 
	I0419 20:03:45.332707  388805 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 20:03:45.332728  388805 kubeadm.go:309] 
	I0419 20:03:45.332770  388805 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 20:03:45.332856  388805 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 20:03:45.332935  388805 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 20:03:45.333017  388805 kubeadm.go:309] 
	I0419 20:03:45.333112  388805 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 20:03:45.333123  388805 kubeadm.go:309] 
	I0419 20:03:45.333220  388805 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 20:03:45.333230  388805 kubeadm.go:309] 
	I0419 20:03:45.333405  388805 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 20:03:45.333546  388805 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 20:03:45.333649  388805 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 20:03:45.333659  388805 kubeadm.go:309] 
	I0419 20:03:45.333763  388805 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 20:03:45.333876  388805 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 20:03:45.333898  388805 kubeadm.go:309] 
	I0419 20:03:45.334005  388805 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token awd3b6.qij36bhfjtodtmhg \
	I0419 20:03:45.334149  388805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea \
	I0419 20:03:45.334184  388805 kubeadm.go:309] 	--control-plane 
	I0419 20:03:45.334188  388805 kubeadm.go:309] 
	I0419 20:03:45.334306  388805 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 20:03:45.334322  388805 kubeadm.go:309] 
	I0419 20:03:45.334461  388805 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token awd3b6.qij36bhfjtodtmhg \
	I0419 20:03:45.334597  388805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea 
	I0419 20:03:45.335016  388805 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
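	The kubeadm output above ends with the two join commands for this cluster: one with --control-plane for additional control-plane nodes and one for workers, both carrying the bootstrap token and the SHA-256 hash of the cluster CA public key. If that hash ever needs to be recomputed on the control-plane node, the usual recipe from the kubeadm documentation is (a sketch, not something this run executes):

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'

	The trailing WARNING can be cleared on the guest with 'sudo systemctl enable kubelet.service', exactly as the message itself suggests.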
	I0419 20:03:45.335039  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:45.335045  388805 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 20:03:45.337136  388805 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 20:03:45.338560  388805 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 20:03:45.344280  388805 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 20:03:45.344296  388805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 20:03:45.364073  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0419 20:03:45.698100  388805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 20:03:45.698155  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:45.698178  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356 minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=true
	I0419 20:03:45.731774  388805 ops.go:34] apiserver oom_adj: -16
	I0419 20:03:45.891906  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:46.392606  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:46.892887  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:47.392996  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:47.892402  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:48.392811  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:48.892887  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:49.392218  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:49.892060  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:50.392411  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:50.892506  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:51.392771  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:51.892571  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:52.392019  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:52.892863  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:53.392557  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:53.892046  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:54.392147  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:54.892479  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:55.392551  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:55.892291  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:56.392509  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:56.892103  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:57.392112  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:57.519848  388805 kubeadm.go:1107] duration metric: took 11.82176059s to wait for elevateKubeSystemPrivileges
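	The burst of identical 'kubectl get sa default' calls above is minikube repeatedly checking, at roughly half-second intervals, that the default service account exists before it reports the elevateKubeSystemPrivileges step (the 11.8 s metric on this line) as finished. An equivalent manual wait loop, using the same binary and kubeconfig shown in the log (sketch only):

	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done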
	W0419 20:03:57.519891  388805 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 20:03:57.519904  388805 kubeadm.go:393] duration metric: took 23.924352566s to StartCluster
	I0419 20:03:57.519957  388805 settings.go:142] acquiring lock: {Name:mk4d89c3e562693d551452a3da7ca47ff322d54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:57.520065  388805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:57.520932  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/kubeconfig: {Name:mk754e069328c06a767f4b9e66452a46be84b49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:57.521167  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 20:03:57.521185  388805 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:57.521220  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:03:57.521226  388805 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 20:03:57.521307  388805 addons.go:69] Setting storage-provisioner=true in profile "ha-423356"
	I0419 20:03:57.521311  388805 addons.go:69] Setting default-storageclass=true in profile "ha-423356"
	I0419 20:03:57.521360  388805 addons.go:234] Setting addon storage-provisioner=true in "ha-423356"
	I0419 20:03:57.521375  388805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-423356"
	I0419 20:03:57.521396  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:03:57.521441  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:57.521810  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.521845  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.521851  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.521895  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.537376  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0419 20:03:57.537391  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0419 20:03:57.537956  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.538029  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.538456  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.538477  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.538613  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.538641  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.538826  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.538976  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.539141  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.539375  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.539408  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.541320  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:57.541691  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 20:03:57.542321  388805 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 20:03:57.542614  388805 addons.go:234] Setting addon default-storageclass=true in "ha-423356"
	I0419 20:03:57.542664  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:03:57.543048  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.543082  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.555711  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0419 20:03:57.556204  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.556760  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.556803  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.557262  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.557536  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.558692  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0419 20:03:57.559305  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.559398  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:57.561650  388805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:03:57.559926  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.562849  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.562963  388805 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 20:03:57.562980  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 20:03:57.562997  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:57.563264  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.563810  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.563863  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.566406  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.566855  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:57.566893  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.567012  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:57.567206  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:57.567375  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:57.567554  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:57.579256  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0419 20:03:57.579707  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.580185  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.580208  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.580578  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.580773  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.582416  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:57.582692  388805 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 20:03:57.582709  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 20:03:57.582729  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:57.585565  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.585964  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:57.585991  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.586245  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:57.586426  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:57.586607  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:57.586739  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:57.666618  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 20:03:57.740900  388805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 20:03:57.770575  388805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 20:03:58.177522  388805 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
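	The long sed pipeline a few lines up rewrites the coredns ConfigMap in place: it inserts a log directive before errors and a hosts block ahead of the forward plugin so that host.minikube.internal resolves to 192.168.39.1 from inside the cluster, which is what this "host record injected" line confirms. One way to verify the injected fragment afterwards (a sketch; assumes kubectl on the host is pointed at this cluster):

	    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	    # expected fragment, per the replace command above:
	    #     hosts {
	    #        192.168.39.1 host.minikube.internal
	    #        fallthrough
	    #     }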
	I0419 20:03:58.430908  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.430935  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.430979  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431008  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431256  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431274  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431283  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431291  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431300  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431302  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431326  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431339  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431347  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431358  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431568  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431583  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431627  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431654  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431665  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431714  388805 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0419 20:03:58.431733  388805 round_trippers.go:469] Request Headers:
	I0419 20:03:58.431744  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:03:58.431752  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:03:58.444053  388805 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 20:03:58.444656  388805 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 20:03:58.444676  388805 round_trippers.go:469] Request Headers:
	I0419 20:03:58.444691  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:03:58.444695  388805 round_trippers.go:473]     Content-Type: application/json
	I0419 20:03:58.444698  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:03:58.453924  388805 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 20:03:58.454109  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.454123  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.454460  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.454479  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.456055  388805 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 20:03:58.457291  388805 addons.go:505] duration metric: took 936.060266ms for enable addons: enabled=[storage-provisioner default-storageclass]
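	Both addons were enabled by copying the storage-provisioner.yaml and storageclass.yaml manifests onto the node and applying them with the bundled kubectl, as shown above. Their state can be checked afterwards from the host; a sketch using the same binary and profile name as this run:

	    out/minikube-linux-amd64 -p ha-423356 addons list
	    out/minikube-linux-amd64 -p ha-423356 kubectl -- get storageclass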
	I0419 20:03:58.457353  388805 start.go:245] waiting for cluster config update ...
	I0419 20:03:58.457373  388805 start.go:254] writing updated cluster config ...
	I0419 20:03:58.459228  388805 out.go:177] 
	I0419 20:03:58.460969  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:58.461046  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:58.462709  388805 out.go:177] * Starting "ha-423356-m02" control-plane node in "ha-423356" cluster
	I0419 20:03:58.463880  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:58.463909  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:03:58.464002  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:03:58.464014  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:03:58.464081  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:58.464231  388805 start.go:360] acquireMachinesLock for ha-423356-m02: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:03:58.464277  388805 start.go:364] duration metric: took 27.912µs to acquireMachinesLock for "ha-423356-m02"
	I0419 20:03:58.464321  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:58.464397  388805 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0419 20:03:58.465857  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:03:58.465964  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:58.465995  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:58.480622  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0419 20:03:58.481060  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:58.481567  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:58.481595  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:58.481931  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:58.482144  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:03:58.482301  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:03:58.482550  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:03:58.482573  388805 client.go:168] LocalClient.Create starting
	I0419 20:03:58.482602  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:03:58.482733  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:58.483131  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:58.483279  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:03:58.483327  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:58.483344  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:58.483380  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:03:58.483392  388805 main.go:141] libmachine: (ha-423356-m02) Calling .PreCreateCheck
	I0419 20:03:58.483705  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:03:58.484286  388805 main.go:141] libmachine: Creating machine...
	I0419 20:03:58.484306  388805 main.go:141] libmachine: (ha-423356-m02) Calling .Create
	I0419 20:03:58.484536  388805 main.go:141] libmachine: (ha-423356-m02) Creating KVM machine...
	I0419 20:03:58.486150  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found existing default KVM network
	I0419 20:03:58.486180  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found existing private KVM network mk-ha-423356
	I0419 20:03:58.486328  388805 main.go:141] libmachine: (ha-423356-m02) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 ...
	I0419 20:03:58.486371  388805 main.go:141] libmachine: (ha-423356-m02) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:03:58.486390  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.486295  389210 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:58.486526  388805 main.go:141] libmachine: (ha-423356-m02) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:03:58.735178  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.735058  389210 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa...
	I0419 20:03:58.856065  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.855909  389210 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/ha-423356-m02.rawdisk...
	I0419 20:03:58.856105  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Writing magic tar header
	I0419 20:03:58.856121  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Writing SSH key tar header
	I0419 20:03:58.856134  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.856019  389210 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 ...
	I0419 20:03:58.856169  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02
	I0419 20:03:58.856198  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 (perms=drwx------)
	I0419 20:03:58.856208  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:03:58.856225  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:58.856237  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:03:58.856252  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:03:58.856264  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:03:58.856280  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:03:58.856290  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home
	I0419 20:03:58.856300  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Skipping /home - not owner
	I0419 20:03:58.856311  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:03:58.856325  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:03:58.856337  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:03:58.856350  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:03:58.856360  388805 main.go:141] libmachine: (ha-423356-m02) Creating domain...
	I0419 20:03:58.857502  388805 main.go:141] libmachine: (ha-423356-m02) define libvirt domain using xml: 
	I0419 20:03:58.857536  388805 main.go:141] libmachine: (ha-423356-m02) <domain type='kvm'>
	I0419 20:03:58.857548  388805 main.go:141] libmachine: (ha-423356-m02)   <name>ha-423356-m02</name>
	I0419 20:03:58.857557  388805 main.go:141] libmachine: (ha-423356-m02)   <memory unit='MiB'>2200</memory>
	I0419 20:03:58.857566  388805 main.go:141] libmachine: (ha-423356-m02)   <vcpu>2</vcpu>
	I0419 20:03:58.857573  388805 main.go:141] libmachine: (ha-423356-m02)   <features>
	I0419 20:03:58.857582  388805 main.go:141] libmachine: (ha-423356-m02)     <acpi/>
	I0419 20:03:58.857593  388805 main.go:141] libmachine: (ha-423356-m02)     <apic/>
	I0419 20:03:58.857601  388805 main.go:141] libmachine: (ha-423356-m02)     <pae/>
	I0419 20:03:58.857610  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.857619  388805 main.go:141] libmachine: (ha-423356-m02)   </features>
	I0419 20:03:58.857630  388805 main.go:141] libmachine: (ha-423356-m02)   <cpu mode='host-passthrough'>
	I0419 20:03:58.857641  388805 main.go:141] libmachine: (ha-423356-m02)   
	I0419 20:03:58.857648  388805 main.go:141] libmachine: (ha-423356-m02)   </cpu>
	I0419 20:03:58.857687  388805 main.go:141] libmachine: (ha-423356-m02)   <os>
	I0419 20:03:58.857714  388805 main.go:141] libmachine: (ha-423356-m02)     <type>hvm</type>
	I0419 20:03:58.857738  388805 main.go:141] libmachine: (ha-423356-m02)     <boot dev='cdrom'/>
	I0419 20:03:58.857750  388805 main.go:141] libmachine: (ha-423356-m02)     <boot dev='hd'/>
	I0419 20:03:58.857760  388805 main.go:141] libmachine: (ha-423356-m02)     <bootmenu enable='no'/>
	I0419 20:03:58.857771  388805 main.go:141] libmachine: (ha-423356-m02)   </os>
	I0419 20:03:58.857779  388805 main.go:141] libmachine: (ha-423356-m02)   <devices>
	I0419 20:03:58.857792  388805 main.go:141] libmachine: (ha-423356-m02)     <disk type='file' device='cdrom'>
	I0419 20:03:58.857808  388805 main.go:141] libmachine: (ha-423356-m02)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/boot2docker.iso'/>
	I0419 20:03:58.857820  388805 main.go:141] libmachine: (ha-423356-m02)       <target dev='hdc' bus='scsi'/>
	I0419 20:03:58.857828  388805 main.go:141] libmachine: (ha-423356-m02)       <readonly/>
	I0419 20:03:58.857839  388805 main.go:141] libmachine: (ha-423356-m02)     </disk>
	I0419 20:03:58.857852  388805 main.go:141] libmachine: (ha-423356-m02)     <disk type='file' device='disk'>
	I0419 20:03:58.857862  388805 main.go:141] libmachine: (ha-423356-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:03:58.857958  388805 main.go:141] libmachine: (ha-423356-m02)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/ha-423356-m02.rawdisk'/>
	I0419 20:03:58.858001  388805 main.go:141] libmachine: (ha-423356-m02)       <target dev='hda' bus='virtio'/>
	I0419 20:03:58.858037  388805 main.go:141] libmachine: (ha-423356-m02)     </disk>
	I0419 20:03:58.858062  388805 main.go:141] libmachine: (ha-423356-m02)     <interface type='network'>
	I0419 20:03:58.858078  388805 main.go:141] libmachine: (ha-423356-m02)       <source network='mk-ha-423356'/>
	I0419 20:03:58.858089  388805 main.go:141] libmachine: (ha-423356-m02)       <model type='virtio'/>
	I0419 20:03:58.858101  388805 main.go:141] libmachine: (ha-423356-m02)     </interface>
	I0419 20:03:58.858109  388805 main.go:141] libmachine: (ha-423356-m02)     <interface type='network'>
	I0419 20:03:58.858122  388805 main.go:141] libmachine: (ha-423356-m02)       <source network='default'/>
	I0419 20:03:58.858137  388805 main.go:141] libmachine: (ha-423356-m02)       <model type='virtio'/>
	I0419 20:03:58.858149  388805 main.go:141] libmachine: (ha-423356-m02)     </interface>
	I0419 20:03:58.858160  388805 main.go:141] libmachine: (ha-423356-m02)     <serial type='pty'>
	I0419 20:03:58.858170  388805 main.go:141] libmachine: (ha-423356-m02)       <target port='0'/>
	I0419 20:03:58.858181  388805 main.go:141] libmachine: (ha-423356-m02)     </serial>
	I0419 20:03:58.858192  388805 main.go:141] libmachine: (ha-423356-m02)     <console type='pty'>
	I0419 20:03:58.858203  388805 main.go:141] libmachine: (ha-423356-m02)       <target type='serial' port='0'/>
	I0419 20:03:58.858211  388805 main.go:141] libmachine: (ha-423356-m02)     </console>
	I0419 20:03:58.858222  388805 main.go:141] libmachine: (ha-423356-m02)     <rng model='virtio'>
	I0419 20:03:58.858276  388805 main.go:141] libmachine: (ha-423356-m02)       <backend model='random'>/dev/random</backend>
	I0419 20:03:58.858302  388805 main.go:141] libmachine: (ha-423356-m02)     </rng>
	I0419 20:03:58.858312  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.858319  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.858329  388805 main.go:141] libmachine: (ha-423356-m02)   </devices>
	I0419 20:03:58.858335  388805 main.go:141] libmachine: (ha-423356-m02) </domain>
	I0419 20:03:58.858346  388805 main.go:141] libmachine: (ha-423356-m02) 
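	The block above is the libvirt domain XML generated for the second control-plane node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-423356 network, one on the default network). Once the domain is defined it can also be inspected from the host with stock libvirt tooling; a sketch, assuming the qemu:///system URI from the cluster config:

	    virsh -c qemu:///system dumpxml ha-423356-m02
	    virsh -c qemu:///system domifaddr ha-423356-m02 --source lease

	The second command reads the DHCP lease for the VM, which is essentially what the "Waiting to get IP..." retries below are polling for.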
	I0419 20:03:58.864683  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:a2:f4:56 in network default
	I0419 20:03:58.865277  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring networks are active...
	I0419 20:03:58.865331  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:03:58.866010  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring network default is active
	I0419 20:03:58.866409  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring network mk-ha-423356 is active
	I0419 20:03:58.866813  388805 main.go:141] libmachine: (ha-423356-m02) Getting domain xml...
	I0419 20:03:58.867785  388805 main.go:141] libmachine: (ha-423356-m02) Creating domain...
	I0419 20:04:00.103380  388805 main.go:141] libmachine: (ha-423356-m02) Waiting to get IP...
	I0419 20:04:00.104279  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.104655  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.104682  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.104617  389210 retry.go:31] will retry after 301.988537ms: waiting for machine to come up
	I0419 20:04:00.408594  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.409195  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.409225  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.409137  389210 retry.go:31] will retry after 329.946651ms: waiting for machine to come up
	I0419 20:04:00.740941  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.741447  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.741476  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.741399  389210 retry.go:31] will retry after 366.125678ms: waiting for machine to come up
	I0419 20:04:01.109032  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:01.109524  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:01.109552  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:01.109480  389210 retry.go:31] will retry after 439.45473ms: waiting for machine to come up
	I0419 20:04:01.550810  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:01.551168  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:01.551197  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:01.551123  389210 retry.go:31] will retry after 532.55463ms: waiting for machine to come up
	I0419 20:04:02.085482  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:02.085969  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:02.086006  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:02.085871  389210 retry.go:31] will retry after 914.829151ms: waiting for machine to come up
	I0419 20:04:03.003220  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:03.003698  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:03.003725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:03.003670  389210 retry.go:31] will retry after 876.494824ms: waiting for machine to come up
	I0419 20:04:03.881855  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:03.882385  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:03.882420  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:03.882332  389210 retry.go:31] will retry after 909.993683ms: waiting for machine to come up
	I0419 20:04:04.793769  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:04.794244  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:04.794283  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:04.794178  389210 retry.go:31] will retry after 1.551125756s: waiting for machine to come up
	I0419 20:04:06.347880  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:06.348387  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:06.348417  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:06.348339  389210 retry.go:31] will retry after 1.808278203s: waiting for machine to come up
	I0419 20:04:08.159309  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:08.159801  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:08.159830  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:08.159762  389210 retry.go:31] will retry after 2.259690381s: waiting for machine to come up
	I0419 20:04:10.421816  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:10.422252  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:10.422283  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:10.422207  389210 retry.go:31] will retry after 2.687448152s: waiting for machine to come up
	I0419 20:04:13.112750  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:13.113160  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:13.113185  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:13.113125  389210 retry.go:31] will retry after 3.825664412s: waiting for machine to come up
	I0419 20:04:16.941639  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:16.942275  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:16.942306  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:16.942220  389210 retry.go:31] will retry after 3.97876348s: waiting for machine to come up
	I0419 20:04:20.922725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.923228  388805 main.go:141] libmachine: (ha-423356-m02) Found IP for machine: 192.168.39.121
	I0419 20:04:20.923258  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has current primary IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.923266  388805 main.go:141] libmachine: (ha-423356-m02) Reserving static IP address...
	I0419 20:04:20.923543  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find host DHCP lease matching {name: "ha-423356-m02", mac: "52:54:00:1e:9f:96", ip: "192.168.39.121"} in network mk-ha-423356
	I0419 20:04:20.996198  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Getting to WaitForSSH function...
	I0419 20:04:20.996229  388805 main.go:141] libmachine: (ha-423356-m02) Reserved static IP address: 192.168.39.121
	I0419 20:04:20.996246  388805 main.go:141] libmachine: (ha-423356-m02) Waiting for SSH to be available...
	I0419 20:04:20.998614  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.998954  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356
	I0419 20:04:20.998984  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find defined IP address of network mk-ha-423356 interface with MAC address 52:54:00:1e:9f:96
	I0419 20:04:20.999160  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH client type: external
	I0419 20:04:20.999182  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa (-rw-------)
	I0419 20:04:20.999204  388805 main.go:141] libmachine: (ha-423356-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:04:20.999217  388805 main.go:141] libmachine: (ha-423356-m02) DBG | About to run SSH command:
	I0419 20:04:20.999226  388805 main.go:141] libmachine: (ha-423356-m02) DBG | exit 0
	I0419 20:04:21.002750  388805 main.go:141] libmachine: (ha-423356-m02) DBG | SSH cmd err, output: exit status 255: 
	I0419 20:04:21.002777  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0419 20:04:21.002787  388805 main.go:141] libmachine: (ha-423356-m02) DBG | command : exit 0
	I0419 20:04:21.002794  388805 main.go:141] libmachine: (ha-423356-m02) DBG | err     : exit status 255
	I0419 20:04:21.002804  388805 main.go:141] libmachine: (ha-423356-m02) DBG | output  : 
	I0419 20:04:24.003077  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Getting to WaitForSSH function...
	I0419 20:04:24.005617  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.006044  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.006082  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.006267  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH client type: external
	I0419 20:04:24.006289  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa (-rw-------)
	I0419 20:04:24.006321  388805 main.go:141] libmachine: (ha-423356-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:04:24.006335  388805 main.go:141] libmachine: (ha-423356-m02) DBG | About to run SSH command:
	I0419 20:04:24.006347  388805 main.go:141] libmachine: (ha-423356-m02) DBG | exit 0
	I0419 20:04:24.132672  388805 main.go:141] libmachine: (ha-423356-m02) DBG | SSH cmd err, output: <nil>: 
	I0419 20:04:24.132980  388805 main.go:141] libmachine: (ha-423356-m02) KVM machine creation complete!
	I0419 20:04:24.133286  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:04:24.133877  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:24.134108  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:24.134297  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:04:24.134311  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:04:24.135689  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:04:24.135708  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:04:24.135716  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:04:24.135724  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.138624  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.139008  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.139053  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.139188  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.139379  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.139544  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.139718  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.139900  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.140110  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.140122  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:04:24.252119  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:04:24.252150  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:04:24.252163  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.255034  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.255430  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.255464  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.255566  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.255800  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.255957  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.256083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.256231  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.256462  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.256475  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:04:24.373877  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:04:24.373955  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:04:24.373965  388805 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:04:24.373977  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.374255  388805 buildroot.go:166] provisioning hostname "ha-423356-m02"
	I0419 20:04:24.374292  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.374491  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.377249  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.377560  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.377586  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.377725  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.377916  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.378083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.378237  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.378452  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.378673  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.378688  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356-m02 && echo "ha-423356-m02" | sudo tee /etc/hostname
	I0419 20:04:24.507630  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356-m02
	
	I0419 20:04:24.507662  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.510376  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.510725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.510753  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.510945  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.511149  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.511305  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.511436  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.511661  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.511893  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.511917  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:04:24.640048  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:04:24.640084  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:04:24.640106  388805 buildroot.go:174] setting up certificates
	I0419 20:04:24.640117  388805 provision.go:84] configureAuth start
	I0419 20:04:24.640126  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.640458  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:24.643287  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.643718  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.643749  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.643879  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.646037  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.646425  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.646460  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.646561  388805 provision.go:143] copyHostCerts
	I0419 20:04:24.646602  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:04:24.646646  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:04:24.646656  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:04:24.646735  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:04:24.646844  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:04:24.646872  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:04:24.646882  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:04:24.646928  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:04:24.646994  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:04:24.647014  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:04:24.647023  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:04:24.647059  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:04:24.647159  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356-m02 san=[127.0.0.1 192.168.39.121 ha-423356-m02 localhost minikube]
	I0419 20:04:24.759734  388805 provision.go:177] copyRemoteCerts
	I0419 20:04:24.759806  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:04:24.759838  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.762761  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.763115  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.763155  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.763329  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.763577  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.763820  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.764004  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:24.850831  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:04:24.850902  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:04:24.877207  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:04:24.877283  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 20:04:24.904395  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:04:24.904486  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:04:24.931218  388805 provision.go:87] duration metric: took 291.084326ms to configureAuth
	I0419 20:04:24.931255  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:04:24.931510  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:24.931604  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.933978  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.934300  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.934333  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.934484  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.934740  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.934923  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.935083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.935258  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.935426  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.935441  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:04:25.205312  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:04:25.205345  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:04:25.205355  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetURL
	I0419 20:04:25.206877  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using libvirt version 6000000
	I0419 20:04:25.209278  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.209606  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.209638  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.209756  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:04:25.209784  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:04:25.209792  388805 client.go:171] duration metric: took 26.727212066s to LocalClient.Create
	I0419 20:04:25.209824  388805 start.go:167] duration metric: took 26.72727434s to libmachine.API.Create "ha-423356"
	I0419 20:04:25.209838  388805 start.go:293] postStartSetup for "ha-423356-m02" (driver="kvm2")
	I0419 20:04:25.209851  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:04:25.209895  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.210140  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:04:25.210180  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.212346  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.212723  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.212751  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.212910  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.213100  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.213312  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.213471  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.299433  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:04:25.303605  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:04:25.303629  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:04:25.303688  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:04:25.303760  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:04:25.303771  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:04:25.303848  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:04:25.313106  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:04:25.339239  388805 start.go:296] duration metric: took 129.382003ms for postStartSetup
	I0419 20:04:25.339310  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:04:25.340042  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:25.343152  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.343551  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.343575  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.343877  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:04:25.344115  388805 start.go:128] duration metric: took 26.879707029s to createHost
	I0419 20:04:25.344145  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.346668  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.347031  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.347061  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.347199  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.347390  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.347578  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.347732  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.347878  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:25.348068  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:25.348082  388805 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:04:25.462004  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557065.438264488
	
	I0419 20:04:25.462031  388805 fix.go:216] guest clock: 1713557065.438264488
	I0419 20:04:25.462043  388805 fix.go:229] Guest: 2024-04-19 20:04:25.438264488 +0000 UTC Remote: 2024-04-19 20:04:25.34413101 +0000 UTC m=+82.553656179 (delta=94.133478ms)
	I0419 20:04:25.462065  388805 fix.go:200] guest clock delta is within tolerance: 94.133478ms
	I0419 20:04:25.462074  388805 start.go:83] releasing machines lock for "ha-423356-m02", held for 26.997761469s
	I0419 20:04:25.462094  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.462403  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:25.465241  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.465622  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.465647  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.467808  388805 out.go:177] * Found network options:
	I0419 20:04:25.469538  388805 out.go:177]   - NO_PROXY=192.168.39.7
	W0419 20:04:25.470899  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:04:25.470952  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471544  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471736  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471854  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:04:25.471892  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	W0419 20:04:25.471975  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:04:25.472056  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:04:25.472079  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.474681  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.474921  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475069  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.475097  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475198  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.475373  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.475396  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.475405  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475532  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.475642  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.475738  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.475779  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.475901  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.476062  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.719884  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:04:25.726092  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:04:25.726171  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:04:25.744358  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:04:25.744385  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:04:25.744462  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:04:25.761288  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:04:25.776675  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:04:25.776741  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:04:25.791962  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:04:25.807595  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:04:25.932781  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:04:26.098054  388805 docker.go:233] disabling docker service ...
	I0419 20:04:26.098156  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:04:26.113552  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:04:26.127664  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:04:26.264626  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:04:26.401744  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:04:26.425709  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:04:26.444838  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:04:26.444908  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.456338  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:04:26.456415  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.467527  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.478697  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.490090  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:04:26.501627  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.512690  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.530120  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.541203  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:04:26.551166  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:04:26.551233  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:04:26.565454  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:04:26.575513  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:26.699963  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:04:26.840670  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:04:26.840742  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:04:26.845752  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:04:26.845820  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:04:26.849940  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:04:26.886986  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:04:26.887081  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:04:26.917238  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:04:26.949136  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:04:26.950646  388805 out.go:177]   - env NO_PROXY=192.168.39.7
	I0419 20:04:26.951909  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:26.954523  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:26.954827  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:26.954856  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:26.955133  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:04:26.959642  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:04:26.973554  388805 mustload.go:65] Loading cluster: ha-423356
	I0419 20:04:26.973817  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:26.974219  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:26.974286  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:26.988968  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0419 20:04:26.989468  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:26.989958  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:26.989980  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:26.990257  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:26.990426  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:04:26.991803  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:04:26.992160  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:26.992197  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:27.006957  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0419 20:04:27.007362  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:27.007780  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:27.007801  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:27.008094  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:27.008283  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:04:27.008439  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.121
	I0419 20:04:27.008456  388805 certs.go:194] generating shared ca certs ...
	I0419 20:04:27.008472  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.008602  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:04:27.008670  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:04:27.008683  388805 certs.go:256] generating profile certs ...
	I0419 20:04:27.008756  388805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:04:27.008780  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109
	I0419 20:04:27.008793  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.254]
	I0419 20:04:27.112885  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 ...
	I0419 20:04:27.112916  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109: {Name:mk4864f27249bc288f458043f35d6f5de535ec40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.113085  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109 ...
	I0419 20:04:27.113102  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109: {Name:mk25afeda6db79edfc338a633462b0b1fad5f92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.113171  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:04:27.113300  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:04:27.113426  388805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:04:27.113444  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:04:27.113462  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:04:27.113475  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:04:27.113486  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:04:27.113496  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:04:27.113506  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:04:27.113526  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:04:27.113538  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:04:27.113584  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:04:27.113612  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:04:27.113622  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:04:27.113641  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:04:27.113661  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:04:27.113685  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:04:27.113727  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:04:27.113755  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.113769  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.113785  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.113817  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:04:27.117111  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:27.117538  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:04:27.117573  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:27.117790  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:04:27.118005  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:04:27.118157  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:04:27.118326  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:04:27.189115  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0419 20:04:27.194749  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 20:04:27.206481  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0419 20:04:27.211441  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0419 20:04:27.222812  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 20:04:27.227348  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 20:04:27.238181  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0419 20:04:27.242239  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 20:04:27.253430  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0419 20:04:27.257901  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 20:04:27.269371  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0419 20:04:27.273332  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0419 20:04:27.284451  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:04:27.311712  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:04:27.339004  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:04:27.365881  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:04:27.391152  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0419 20:04:27.416177  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:04:27.440792  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:04:27.465655  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:04:27.490454  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:04:27.515673  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:04:27.540428  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:04:27.565533  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 20:04:27.582912  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0419 20:04:27.599989  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 20:04:27.618147  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 20:04:27.635953  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 20:04:27.653125  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0419 20:04:27.669991  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0419 20:04:27.687066  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:04:27.692808  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:04:27.704737  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.709286  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.709347  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.715005  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:04:27.727274  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:04:27.739271  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.744062  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.744141  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.750024  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:04:27.762646  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:04:27.775057  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.779743  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.779803  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.785629  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:04:27.797114  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:04:27.801329  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:04:27.801377  388805 kubeadm.go:928] updating node {m02 192.168.39.121 8443 v1.30.0 crio true true} ...
	I0419 20:04:27.801458  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:04:27.801483  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:04:27.801516  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:04:27.822225  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:04:27.822294  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 20:04:27.822359  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:04:27.834421  388805 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 20:04:27.834487  388805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 20:04:27.845841  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 20:04:27.845857  388805 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0419 20:04:27.847428  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:04:27.845909  388805 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0419 20:04:27.847520  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:04:27.852828  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 20:04:27.852861  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 20:04:28.645040  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:04:28.645122  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:04:28.650303  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 20:04:28.650332  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 20:04:29.207327  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:04:29.223643  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:04:29.223751  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:04:29.228428  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 20:04:29.228474  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 20:04:29.669767  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 20:04:29.679434  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 20:04:29.696503  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:04:29.713269  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:04:29.730434  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:04:29.734296  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:04:29.746392  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:29.861502  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:04:29.878810  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:04:29.879257  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:29.879314  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:29.897676  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0419 20:04:29.898109  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:29.898817  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:29.898848  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:29.899195  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:29.899446  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:04:29.899613  388805 start.go:316] joinCluster: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:04:29.899780  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 20:04:29.899809  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:04:29.902828  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:29.903241  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:04:29.903274  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:29.903488  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:04:29.903657  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:04:29.903816  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:04:29.903975  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:04:30.056745  388805 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:04:30.056794  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c842jz.8vslh2ec722m2dzi --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m02 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443"
	I0419 20:04:52.832830  388805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c842jz.8vslh2ec722m2dzi --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m02 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443": (22.776004793s)
	I0419 20:04:52.832907  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 20:04:53.461792  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356-m02 minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=false
	I0419 20:04:53.593929  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-423356-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 20:04:53.708722  388805 start.go:318] duration metric: took 23.809094342s to joinCluster
	I0419 20:04:53.708808  388805 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:04:53.710449  388805 out.go:177] * Verifying Kubernetes components...
	I0419 20:04:53.709161  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:53.711797  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:54.003770  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:04:54.056560  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:04:54.056922  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 20:04:54.057003  388805 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0419 20:04:54.057371  388805 node_ready.go:35] waiting up to 6m0s for node "ha-423356-m02" to be "Ready" ...
	I0419 20:04:54.057519  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:54.057529  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:54.057539  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:54.057545  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:54.066381  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:04:54.557618  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:54.557650  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:54.557663  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:54.557670  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:54.565308  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:04:55.057966  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:55.057994  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:55.058006  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:55.058013  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:55.061621  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:55.557695  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:55.557725  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:55.557737  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:55.557743  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:55.563203  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:04:56.058415  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:56.058439  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:56.058455  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:56.058459  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:56.061535  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:56.062152  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:04:56.558591  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:56.558615  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:56.558627  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:56.558631  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:56.562428  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:57.058414  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:57.058438  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:57.058446  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:57.058451  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:57.063025  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:57.558307  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:57.558330  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:57.558343  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:57.558348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:57.561906  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:58.058234  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:58.058264  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:58.058275  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:58.058282  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:58.061797  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:58.062796  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:04:58.558459  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:58.558484  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:58.558496  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:58.558501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:58.562596  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:59.057578  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:59.057601  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:59.057610  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:59.057614  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:59.061733  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:59.557891  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:59.557915  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:59.557921  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:59.557926  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:59.561788  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.057624  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:00.057651  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:00.057660  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:00.057664  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:00.061090  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.557862  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:00.557886  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:00.557895  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:00.557899  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:00.561563  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.562209  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:05:01.058577  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:01.058596  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:01.058604  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:01.058608  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:01.061851  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:01.557745  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:01.557767  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:01.557775  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:01.557779  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:01.560837  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.058097  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.058118  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.058128  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.058133  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.061563  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.062383  388805 node_ready.go:49] node "ha-423356-m02" has status "Ready":"True"
	I0419 20:05:02.062402  388805 node_ready.go:38] duration metric: took 8.005005431s for node "ha-423356-m02" to be "Ready" ...
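
The repeated GET requests above are minikube's node_ready poller: it re-fetches /api/v1/nodes/ha-423356-m02 roughly every half second until the node reports the Ready condition, which here took about 8 seconds. The following is a minimal client-go sketch of that kind of readiness loop, not minikube's own code; the kubeconfig location and node name are placeholders.

// node_ready_sketch.go - illustrative only, assuming a reachable cluster via
// the default kubeconfig. The node name below is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "ha-423356-m02" // placeholder
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The log above is waiting for exactly this condition to flip to True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
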
	I0419 20:05:02.062412  388805 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:05:02.062477  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:02.062487  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.062494  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.062501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.067129  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:02.074572  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.074680  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9td9f
	I0419 20:05:02.074691  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.074702  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.074708  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.077943  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.078761  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.078778  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.078785  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.078788  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.081115  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.081654  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.081672  388805 pod_ready.go:81] duration metric: took 7.074672ms for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.081684  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.081742  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rr7zk
	I0419 20:05:02.081751  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.081761  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.081766  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.087602  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:05:02.088318  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.088333  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.088343  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.088348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.090938  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.091355  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.091372  388805 pod_ready.go:81] duration metric: took 9.680689ms for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.091385  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.091453  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356
	I0419 20:05:02.091463  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.091473  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.091477  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.093843  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.094411  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.094426  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.094433  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.094436  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.096788  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.097353  388805 pod_ready.go:92] pod "etcd-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.097373  388805 pod_ready.go:81] duration metric: took 5.980968ms for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.097385  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.097442  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:02.097453  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.097461  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.097465  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.100214  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.101285  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.101305  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.101316  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.101321  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.104469  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.597964  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:02.597994  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.598006  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.598013  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.601870  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.602469  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.602488  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.602496  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.602500  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.605084  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:03.098149  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:03.098225  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.098254  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.098264  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.103071  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:03.103767  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:03.103788  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.103798  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.103803  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.107143  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:03.597583  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:03.597608  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.597616  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.597620  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.602124  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:03.602913  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:03.602928  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.602936  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.602939  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.605986  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:04.097593  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:04.097619  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.097626  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.097631  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.101805  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:04.102425  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:04.102443  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.102457  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.102462  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.105298  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:04.105926  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:04.598430  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:04.598463  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.598472  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.598482  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.601964  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:04.602882  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:04.602897  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.602905  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.602909  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.606297  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.098020  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:05.098044  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.098052  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.098057  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.106939  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:05:05.108186  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:05.108202  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.108209  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.108214  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.111314  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.598398  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:05.598427  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.598441  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.598447  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.601475  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.602428  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:05.602443  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.602453  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.602458  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.605156  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:06.098016  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:06.098038  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.098047  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.098051  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.102959  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.103638  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:06.103657  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.103667  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.103673  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.108625  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.109230  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:06.597675  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:06.597703  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.597711  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.597715  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.601802  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.602369  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:06.602385  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.602393  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.602397  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.605226  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:07.098240  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:07.098266  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.098274  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.098277  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.101799  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:07.102362  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:07.102379  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.102387  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.102392  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.105211  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:07.598377  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:07.598402  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.598411  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.598415  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.602777  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:07.603529  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:07.603550  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.603559  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.603564  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.606643  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.097602  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:08.097628  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.097637  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.097643  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.101330  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.102029  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:08.102050  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.102061  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.102069  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.104444  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:08.598110  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:08.598134  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.598143  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.598147  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.601785  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.602707  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:08.602723  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.602732  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.602736  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.605766  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.606378  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:09.097650  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:09.097675  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.097683  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.097688  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.101223  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:09.101920  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:09.101942  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.101951  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.101957  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.104477  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:09.598304  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:09.598332  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.598345  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.598350  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.602825  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:09.603439  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:09.603460  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.603468  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.603473  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.606467  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.098265  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:10.098290  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.098296  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.098300  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.102294  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.102878  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.102899  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.102910  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.102915  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.106044  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.106652  388805 pod_ready.go:92] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.106671  388805 pod_ready.go:81] duration metric: took 8.009279266s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.106685  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.106735  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356
	I0419 20:05:10.106744  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.106751  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.106756  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.114415  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:05:10.115333  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.115352  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.115363  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.115367  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.118412  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.118998  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.119020  388805 pod_ready.go:81] duration metric: took 12.325135ms for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.119042  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.119105  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m02
	I0419 20:05:10.119116  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.119126  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.119131  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.121490  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.121978  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.121993  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.122002  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.122009  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.124229  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.124703  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.124722  388805 pod_ready.go:81] duration metric: took 5.671466ms for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.124734  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.124800  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:05:10.124810  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.124819  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.124824  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.127165  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.127840  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.127860  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.127870  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.127877  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.130135  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.130707  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.130722  388805 pod_ready.go:81] duration metric: took 5.979961ms for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.130733  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.130787  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:05:10.130797  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.130806  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.130810  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.133166  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.133771  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.133785  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.133794  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.133801  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.135891  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.136460  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.136480  388805 pod_ready.go:81] duration metric: took 5.738438ms for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.136492  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.298923  388805 request.go:629] Waited for 162.36358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:05:10.299007  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:05:10.299015  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.299024  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.299033  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.302431  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.498366  388805 request.go:629] Waited for 195.307939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.498439  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.498446  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.498455  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.498464  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.502509  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:10.503245  388805 pod_ready.go:92] pod "kube-proxy-chd2r" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.503262  388805 pod_ready.go:81] duration metric: took 366.759375ms for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.503273  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.698393  388805 request.go:629] Waited for 195.054471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:05:10.698464  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:05:10.698469  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.698475  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.698479  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.702111  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.898455  388805 request.go:629] Waited for 195.304748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.898545  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.898564  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.898575  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.898585  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.902958  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:10.903675  388805 pod_ready.go:92] pod "kube-proxy-d56ch" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.903694  388805 pod_ready.go:81] duration metric: took 400.412836ms for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.903704  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.098852  388805 request.go:629] Waited for 195.077426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:05:11.098934  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:05:11.098939  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.098947  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.098951  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.101972  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.299053  388805 request.go:629] Waited for 196.392329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:11.299129  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:11.299143  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.299155  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.299161  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.302455  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.303119  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:11.303138  388805 pod_ready.go:81] duration metric: took 399.428494ms for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.303149  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.499247  388805 request.go:629] Waited for 196.020056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:05:11.499350  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:05:11.499360  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.499371  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.499381  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.502985  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.699219  388805 request.go:629] Waited for 195.355641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:11.699290  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:11.699294  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.699302  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.699308  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.703264  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.705144  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:11.705171  388805 pod_ready.go:81] duration metric: took 402.010035ms for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.705186  388805 pod_ready.go:38] duration metric: took 9.642760124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:05:11.705212  388805 api_server.go:52] waiting for apiserver process to appear ...
	I0419 20:05:11.705283  388805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:05:11.722507  388805 api_server.go:72] duration metric: took 18.013656423s to wait for apiserver process to appear ...
	I0419 20:05:11.722535  388805 api_server.go:88] waiting for apiserver healthz status ...
	I0419 20:05:11.722558  388805 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0419 20:05:11.726921  388805 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0419 20:05:11.726985  388805 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0419 20:05:11.726993  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.727001  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.727009  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.728059  388805 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 20:05:11.728156  388805 api_server.go:141] control plane version: v1.30.0
	I0419 20:05:11.728175  388805 api_server.go:131] duration metric: took 5.633096ms to wait for apiserver health ...
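
At this point the test switches from per-pod polling to probing the apiserver directly: an unauthenticated-style GET against /healthz, then /version to read the control plane version (v1.30.0 in this run). A minimal client-go equivalent is sketched below, under the same placeholder-kubeconfig assumption as the earlier sketches.

// healthz_sketch.go - illustrative only.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz - the same endpoint checked above; "ok" means the apiserver is healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version - reports the control plane version.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
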
	I0419 20:05:11.728185  388805 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 20:05:11.898611  388805 request.go:629] Waited for 170.337538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:11.898681  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:11.898689  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.898699  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.898704  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.905985  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:05:11.911708  388805 system_pods.go:59] 17 kube-system pods found
	I0419 20:05:11.911737  388805 system_pods.go:61] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:05:11.911742  388805 system_pods.go:61] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:05:11.911746  388805 system_pods.go:61] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:05:11.911749  388805 system_pods.go:61] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:05:11.911757  388805 system_pods.go:61] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:05:11.911762  388805 system_pods.go:61] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:05:11.911768  388805 system_pods.go:61] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:05:11.911773  388805 system_pods.go:61] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:05:11.911777  388805 system_pods.go:61] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:05:11.911785  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:05:11.911793  388805 system_pods.go:61] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:05:11.911798  388805 system_pods.go:61] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:05:11.911805  388805 system_pods.go:61] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:05:11.911814  388805 system_pods.go:61] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:05:11.911818  388805 system_pods.go:61] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:05:11.911826  388805 system_pods.go:61] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:05:11.911830  388805 system_pods.go:61] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:05:11.911836  388805 system_pods.go:74] duration metric: took 183.642064ms to wait for pod list to return data ...
	I0419 20:05:11.911846  388805 default_sa.go:34] waiting for default service account to be created ...
	I0419 20:05:12.098707  388805 request.go:629] Waited for 186.785435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:05:12.098781  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:05:12.098792  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.098802  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.098808  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.102336  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:12.102580  388805 default_sa.go:45] found service account: "default"
	I0419 20:05:12.102598  388805 default_sa.go:55] duration metric: took 190.7419ms for default service account to be created ...
	I0419 20:05:12.102614  388805 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 20:05:12.299071  388805 request.go:629] Waited for 196.355569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:12.299132  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:12.299136  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.299145  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.299148  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.304086  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:12.308418  388805 system_pods.go:86] 17 kube-system pods found
	I0419 20:05:12.308445  388805 system_pods.go:89] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:05:12.308450  388805 system_pods.go:89] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:05:12.308454  388805 system_pods.go:89] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:05:12.308459  388805 system_pods.go:89] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:05:12.308465  388805 system_pods.go:89] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:05:12.308471  388805 system_pods.go:89] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:05:12.308477  388805 system_pods.go:89] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:05:12.308483  388805 system_pods.go:89] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:05:12.308495  388805 system_pods.go:89] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:05:12.308502  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:05:12.308508  388805 system_pods.go:89] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:05:12.308520  388805 system_pods.go:89] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:05:12.308524  388805 system_pods.go:89] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:05:12.308528  388805 system_pods.go:89] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:05:12.308532  388805 system_pods.go:89] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:05:12.308537  388805 system_pods.go:89] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:05:12.308543  388805 system_pods.go:89] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:05:12.308550  388805 system_pods.go:126] duration metric: took 205.927011ms to wait for k8s-apps to be running ...
	I0419 20:05:12.308557  388805 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 20:05:12.308617  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:05:12.324115  388805 system_svc.go:56] duration metric: took 15.544914ms WaitForService to wait for kubelet
	I0419 20:05:12.324156  388805 kubeadm.go:576] duration metric: took 18.61530927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:05:12.324187  388805 node_conditions.go:102] verifying NodePressure condition ...
	I0419 20:05:12.498606  388805 request.go:629] Waited for 174.323457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0419 20:05:12.498667  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0419 20:05:12.498672  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.498680  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.498684  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.503330  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:12.504298  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:05:12.504326  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:05:12.504347  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:05:12.504353  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:05:12.504360  388805 node_conditions.go:105] duration metric: took 180.166674ms to run NodePressure ...
	I0419 20:05:12.504376  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:05:12.504407  388805 start.go:254] writing updated cluster config ...
	I0419 20:05:12.509120  388805 out.go:177] 
	I0419 20:05:12.510974  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:12.511110  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:12.512994  388805 out.go:177] * Starting "ha-423356-m03" control-plane node in "ha-423356" cluster
	I0419 20:05:12.514233  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:05:12.514273  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:05:12.514402  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:05:12.514424  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:05:12.514540  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:12.514767  388805 start.go:360] acquireMachinesLock for ha-423356-m03: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:05:12.514822  388805 start.go:364] duration metric: took 30.511µs to acquireMachinesLock for "ha-423356-m03"
	I0419 20:05:12.514848  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:05:12.514987  388805 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0419 20:05:12.516593  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:05:12.516713  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:12.516764  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:12.531598  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0419 20:05:12.532029  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:12.532471  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:12.532492  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:12.532853  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:12.533062  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:12.533281  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:12.533484  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:05:12.533518  388805 client.go:168] LocalClient.Create starting
	I0419 20:05:12.533555  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:05:12.533598  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:05:12.533621  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:05:12.533678  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:05:12.533698  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:05:12.533709  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:05:12.533726  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:05:12.533738  388805 main.go:141] libmachine: (ha-423356-m03) Calling .PreCreateCheck
	I0419 20:05:12.533917  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:12.534353  388805 main.go:141] libmachine: Creating machine...
	I0419 20:05:12.534372  388805 main.go:141] libmachine: (ha-423356-m03) Calling .Create
	I0419 20:05:12.534489  388805 main.go:141] libmachine: (ha-423356-m03) Creating KVM machine...
	I0419 20:05:12.535689  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found existing default KVM network
	I0419 20:05:12.535867  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found existing private KVM network mk-ha-423356
	I0419 20:05:12.536025  388805 main.go:141] libmachine: (ha-423356-m03) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 ...
	I0419 20:05:12.536049  388805 main.go:141] libmachine: (ha-423356-m03) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:05:12.536128  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.536016  389598 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:05:12.536197  388805 main.go:141] libmachine: (ha-423356-m03) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:05:12.781196  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.781088  389598 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa...
	I0419 20:05:12.955595  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.955479  389598 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/ha-423356-m03.rawdisk...
	I0419 20:05:12.955629  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Writing magic tar header
	I0419 20:05:12.955645  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Writing SSH key tar header
	I0419 20:05:12.955662  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.955625  389598 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 ...
	I0419 20:05:12.955793  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03
	I0419 20:05:12.955813  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:05:12.955826  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 (perms=drwx------)
	I0419 20:05:12.955838  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:05:12.955845  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:05:12.955855  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:05:12.955864  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:05:12.955876  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:05:12.955888  388805 main.go:141] libmachine: (ha-423356-m03) Creating domain...
	I0419 20:05:12.955900  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:05:12.955916  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:05:12.955927  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:05:12.955933  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:05:12.955938  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home
	I0419 20:05:12.955949  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Skipping /home - not owner
	I0419 20:05:12.957016  388805 main.go:141] libmachine: (ha-423356-m03) define libvirt domain using xml: 
	I0419 20:05:12.957039  388805 main.go:141] libmachine: (ha-423356-m03) <domain type='kvm'>
	I0419 20:05:12.957049  388805 main.go:141] libmachine: (ha-423356-m03)   <name>ha-423356-m03</name>
	I0419 20:05:12.957060  388805 main.go:141] libmachine: (ha-423356-m03)   <memory unit='MiB'>2200</memory>
	I0419 20:05:12.957068  388805 main.go:141] libmachine: (ha-423356-m03)   <vcpu>2</vcpu>
	I0419 20:05:12.957077  388805 main.go:141] libmachine: (ha-423356-m03)   <features>
	I0419 20:05:12.957086  388805 main.go:141] libmachine: (ha-423356-m03)     <acpi/>
	I0419 20:05:12.957096  388805 main.go:141] libmachine: (ha-423356-m03)     <apic/>
	I0419 20:05:12.957103  388805 main.go:141] libmachine: (ha-423356-m03)     <pae/>
	I0419 20:05:12.957112  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957123  388805 main.go:141] libmachine: (ha-423356-m03)   </features>
	I0419 20:05:12.957133  388805 main.go:141] libmachine: (ha-423356-m03)   <cpu mode='host-passthrough'>
	I0419 20:05:12.957174  388805 main.go:141] libmachine: (ha-423356-m03)   
	I0419 20:05:12.957200  388805 main.go:141] libmachine: (ha-423356-m03)   </cpu>
	I0419 20:05:12.957214  388805 main.go:141] libmachine: (ha-423356-m03)   <os>
	I0419 20:05:12.957225  388805 main.go:141] libmachine: (ha-423356-m03)     <type>hvm</type>
	I0419 20:05:12.957237  388805 main.go:141] libmachine: (ha-423356-m03)     <boot dev='cdrom'/>
	I0419 20:05:12.957247  388805 main.go:141] libmachine: (ha-423356-m03)     <boot dev='hd'/>
	I0419 20:05:12.957263  388805 main.go:141] libmachine: (ha-423356-m03)     <bootmenu enable='no'/>
	I0419 20:05:12.957277  388805 main.go:141] libmachine: (ha-423356-m03)   </os>
	I0419 20:05:12.957316  388805 main.go:141] libmachine: (ha-423356-m03)   <devices>
	I0419 20:05:12.957341  388805 main.go:141] libmachine: (ha-423356-m03)     <disk type='file' device='cdrom'>
	I0419 20:05:12.957364  388805 main.go:141] libmachine: (ha-423356-m03)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/boot2docker.iso'/>
	I0419 20:05:12.957381  388805 main.go:141] libmachine: (ha-423356-m03)       <target dev='hdc' bus='scsi'/>
	I0419 20:05:12.957395  388805 main.go:141] libmachine: (ha-423356-m03)       <readonly/>
	I0419 20:05:12.957405  388805 main.go:141] libmachine: (ha-423356-m03)     </disk>
	I0419 20:05:12.957418  388805 main.go:141] libmachine: (ha-423356-m03)     <disk type='file' device='disk'>
	I0419 20:05:12.957430  388805 main.go:141] libmachine: (ha-423356-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:05:12.957447  388805 main.go:141] libmachine: (ha-423356-m03)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/ha-423356-m03.rawdisk'/>
	I0419 20:05:12.957468  388805 main.go:141] libmachine: (ha-423356-m03)       <target dev='hda' bus='virtio'/>
	I0419 20:05:12.957487  388805 main.go:141] libmachine: (ha-423356-m03)     </disk>
	I0419 20:05:12.957502  388805 main.go:141] libmachine: (ha-423356-m03)     <interface type='network'>
	I0419 20:05:12.957515  388805 main.go:141] libmachine: (ha-423356-m03)       <source network='mk-ha-423356'/>
	I0419 20:05:12.957523  388805 main.go:141] libmachine: (ha-423356-m03)       <model type='virtio'/>
	I0419 20:05:12.957534  388805 main.go:141] libmachine: (ha-423356-m03)     </interface>
	I0419 20:05:12.957548  388805 main.go:141] libmachine: (ha-423356-m03)     <interface type='network'>
	I0419 20:05:12.957559  388805 main.go:141] libmachine: (ha-423356-m03)       <source network='default'/>
	I0419 20:05:12.957573  388805 main.go:141] libmachine: (ha-423356-m03)       <model type='virtio'/>
	I0419 20:05:12.957584  388805 main.go:141] libmachine: (ha-423356-m03)     </interface>
	I0419 20:05:12.957594  388805 main.go:141] libmachine: (ha-423356-m03)     <serial type='pty'>
	I0419 20:05:12.957604  388805 main.go:141] libmachine: (ha-423356-m03)       <target port='0'/>
	I0419 20:05:12.957623  388805 main.go:141] libmachine: (ha-423356-m03)     </serial>
	I0419 20:05:12.957640  388805 main.go:141] libmachine: (ha-423356-m03)     <console type='pty'>
	I0419 20:05:12.957657  388805 main.go:141] libmachine: (ha-423356-m03)       <target type='serial' port='0'/>
	I0419 20:05:12.957667  388805 main.go:141] libmachine: (ha-423356-m03)     </console>
	I0419 20:05:12.957680  388805 main.go:141] libmachine: (ha-423356-m03)     <rng model='virtio'>
	I0419 20:05:12.957692  388805 main.go:141] libmachine: (ha-423356-m03)       <backend model='random'>/dev/random</backend>
	I0419 20:05:12.957701  388805 main.go:141] libmachine: (ha-423356-m03)     </rng>
	I0419 20:05:12.957715  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957730  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957744  388805 main.go:141] libmachine: (ha-423356-m03)   </devices>
	I0419 20:05:12.957757  388805 main.go:141] libmachine: (ha-423356-m03) </domain>
	I0419 20:05:12.957763  388805 main.go:141] libmachine: (ha-423356-m03) 
	I0419 20:05:12.965531  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:7f:8d:21 in network default
	I0419 20:05:12.966109  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:12.966125  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring networks are active...
	I0419 20:05:12.966850  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring network default is active
	I0419 20:05:12.967203  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring network mk-ha-423356 is active
	I0419 20:05:12.967578  388805 main.go:141] libmachine: (ha-423356-m03) Getting domain xml...
	I0419 20:05:12.968345  388805 main.go:141] libmachine: (ha-423356-m03) Creating domain...
	I0419 20:05:14.183762  388805 main.go:141] libmachine: (ha-423356-m03) Waiting to get IP...
	I0419 20:05:14.184701  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.185167  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.185234  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.185160  389598 retry.go:31] will retry after 283.969012ms: waiting for machine to come up
	I0419 20:05:14.470670  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.470995  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.471029  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.470977  389598 retry.go:31] will retry after 384.20274ms: waiting for machine to come up
	I0419 20:05:14.856501  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.856943  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.856971  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.856902  389598 retry.go:31] will retry after 483.55961ms: waiting for machine to come up
	I0419 20:05:15.341765  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:15.342311  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:15.342342  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:15.342260  389598 retry.go:31] will retry after 489.203595ms: waiting for machine to come up
	I0419 20:05:15.832901  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:15.833411  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:15.833449  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:15.833362  389598 retry.go:31] will retry after 553.302739ms: waiting for machine to come up
	I0419 20:05:16.387965  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:16.388388  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:16.388422  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:16.388323  389598 retry.go:31] will retry after 809.088382ms: waiting for machine to come up
	I0419 20:05:17.198680  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:17.199231  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:17.199267  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:17.199167  389598 retry.go:31] will retry after 748.965459ms: waiting for machine to come up
	I0419 20:05:17.950319  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:17.950812  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:17.950841  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:17.950751  389598 retry.go:31] will retry after 1.000266671s: waiting for machine to come up
	I0419 20:05:18.952983  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:18.953501  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:18.953533  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:18.953444  389598 retry.go:31] will retry after 1.410601616s: waiting for machine to come up
	I0419 20:05:20.365780  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:20.366286  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:20.366306  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:20.366237  389598 retry.go:31] will retry after 1.859485208s: waiting for machine to come up
	I0419 20:05:22.227079  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:22.227659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:22.227695  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:22.227587  389598 retry.go:31] will retry after 2.263798453s: waiting for machine to come up
	I0419 20:05:24.492659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:24.493053  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:24.493085  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:24.492990  389598 retry.go:31] will retry after 3.471867165s: waiting for machine to come up
	I0419 20:05:27.966230  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:27.966720  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:27.966748  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:27.966678  389598 retry.go:31] will retry after 3.751116138s: waiting for machine to come up
	I0419 20:05:31.719321  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:31.719645  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:31.719670  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:31.719587  389598 retry.go:31] will retry after 5.08434409s: waiting for machine to come up
	I0419 20:05:36.805700  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.806130  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has current primary IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.806145  388805 main.go:141] libmachine: (ha-423356-m03) Found IP for machine: 192.168.39.111
	I0419 20:05:36.806165  388805 main.go:141] libmachine: (ha-423356-m03) Reserving static IP address...
	I0419 20:05:36.806532  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find host DHCP lease matching {name: "ha-423356-m03", mac: "52:54:00:fc:cf:fe", ip: "192.168.39.111"} in network mk-ha-423356
	I0419 20:05:36.880933  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Getting to WaitForSSH function...
	I0419 20:05:36.880994  388805 main.go:141] libmachine: (ha-423356-m03) Reserved static IP address: 192.168.39.111
	I0419 20:05:36.881010  388805 main.go:141] libmachine: (ha-423356-m03) Waiting for SSH to be available...
	I0419 20:05:36.883767  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.884211  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356
	I0419 20:05:36.884235  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find defined IP address of network mk-ha-423356 interface with MAC address 52:54:00:fc:cf:fe
	I0419 20:05:36.884394  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH client type: external
	I0419 20:05:36.884422  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa (-rw-------)
	I0419 20:05:36.884453  388805 main.go:141] libmachine: (ha-423356-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:05:36.884469  388805 main.go:141] libmachine: (ha-423356-m03) DBG | About to run SSH command:
	I0419 20:05:36.884490  388805 main.go:141] libmachine: (ha-423356-m03) DBG | exit 0
	I0419 20:05:36.888311  388805 main.go:141] libmachine: (ha-423356-m03) DBG | SSH cmd err, output: exit status 255: 
	I0419 20:05:36.888330  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0419 20:05:36.888338  388805 main.go:141] libmachine: (ha-423356-m03) DBG | command : exit 0
	I0419 20:05:36.888348  388805 main.go:141] libmachine: (ha-423356-m03) DBG | err     : exit status 255
	I0419 20:05:36.888355  388805 main.go:141] libmachine: (ha-423356-m03) DBG | output  : 
	I0419 20:05:39.890772  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Getting to WaitForSSH function...
	I0419 20:05:39.893113  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:39.893535  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:39.893565  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:39.893673  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH client type: external
	I0419 20:05:39.893695  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa (-rw-------)
	I0419 20:05:39.893730  388805 main.go:141] libmachine: (ha-423356-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:05:39.893756  388805 main.go:141] libmachine: (ha-423356-m03) DBG | About to run SSH command:
	I0419 20:05:39.893781  388805 main.go:141] libmachine: (ha-423356-m03) DBG | exit 0
	I0419 20:05:40.020539  388805 main.go:141] libmachine: (ha-423356-m03) DBG | SSH cmd err, output: <nil>: 
	I0419 20:05:40.020828  388805 main.go:141] libmachine: (ha-423356-m03) KVM machine creation complete!
	I0419 20:05:40.021183  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:40.021776  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:40.021991  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:40.022177  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:05:40.022201  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:05:40.023490  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:05:40.023503  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:05:40.023510  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:05:40.023515  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.025615  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.026055  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.026089  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.026197  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.026375  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.026511  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.026639  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.026792  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.027063  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.027083  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:05:40.140602  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:05:40.140626  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:05:40.140652  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.143644  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.144040  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.144070  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.144270  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.144501  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.144696  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.144866  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.145067  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.145296  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.145312  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:05:40.257769  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:05:40.257847  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:05:40.257861  388805 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:05:40.257872  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.258170  388805 buildroot.go:166] provisioning hostname "ha-423356-m03"
	I0419 20:05:40.258202  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.258439  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.260916  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.261286  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.261316  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.261426  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.261619  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.261766  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.261941  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.262110  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.262280  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.262291  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356-m03 && echo "ha-423356-m03" | sudo tee /etc/hostname
	I0419 20:05:40.388355  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356-m03
	
	I0419 20:05:40.388397  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.391284  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.391629  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.391666  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.391858  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.392071  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.392226  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.392363  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.392509  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.392738  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.392756  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:05:40.514848  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:05:40.514881  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:05:40.514903  388805 buildroot.go:174] setting up certificates
	I0419 20:05:40.514914  388805 provision.go:84] configureAuth start
	I0419 20:05:40.514932  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.515217  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:40.518036  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.518463  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.518504  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.518700  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.520940  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.521297  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.521326  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.521493  388805 provision.go:143] copyHostCerts
	I0419 20:05:40.521530  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:05:40.521571  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:05:40.521583  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:05:40.521665  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:05:40.521767  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:05:40.521795  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:05:40.521801  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:05:40.521838  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:05:40.521900  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:05:40.521925  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:05:40.521937  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:05:40.521978  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:05:40.522058  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356-m03 san=[127.0.0.1 192.168.39.111 ha-423356-m03 localhost minikube]
	I0419 20:05:40.787540  388805 provision.go:177] copyRemoteCerts
	I0419 20:05:40.787603  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:05:40.787628  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.790222  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.790608  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.790640  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.790776  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.791016  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.791161  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.791351  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:40.879307  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:05:40.879377  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:05:40.906826  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:05:40.906923  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 20:05:40.934396  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:05:40.934473  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:05:40.960610  388805 provision.go:87] duration metric: took 445.681947ms to configureAuth
	I0419 20:05:40.960655  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:05:40.960860  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:40.960963  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.963699  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.964104  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.964130  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.964298  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.964499  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.964684  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.964821  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.965009  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.965213  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.965235  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:05:41.248937  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:05:41.248972  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:05:41.248981  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetURL
	I0419 20:05:41.250633  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using libvirt version 6000000
	I0419 20:05:41.252996  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.253382  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.253411  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.253641  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:05:41.253660  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:05:41.253668  388805 client.go:171] duration metric: took 28.720141499s to LocalClient.Create
	I0419 20:05:41.253695  388805 start.go:167] duration metric: took 28.7202136s to libmachine.API.Create "ha-423356"
	I0419 20:05:41.253705  388805 start.go:293] postStartSetup for "ha-423356-m03" (driver="kvm2")
	I0419 20:05:41.253715  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:05:41.253744  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.253968  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:05:41.253998  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.256313  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.256601  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.256649  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.256901  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.257078  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.257252  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.257418  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.343433  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:05:41.348557  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:05:41.348584  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:05:41.348686  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:05:41.348782  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:05:41.348800  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:05:41.348912  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:05:41.359212  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:05:41.388584  388805 start.go:296] duration metric: took 134.857661ms for postStartSetup
	I0419 20:05:41.388680  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:41.389390  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:41.391939  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.392250  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.392283  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.392580  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:41.392808  388805 start.go:128] duration metric: took 28.877809223s to createHost
	I0419 20:05:41.392835  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.395173  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.395609  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.395637  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.395781  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.395959  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.396115  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.396248  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.396443  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:41.396666  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:41.396683  388805 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:05:41.509662  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557141.485721310
	
	I0419 20:05:41.509689  388805 fix.go:216] guest clock: 1713557141.485721310
	I0419 20:05:41.509699  388805 fix.go:229] Guest: 2024-04-19 20:05:41.48572131 +0000 UTC Remote: 2024-04-19 20:05:41.392822689 +0000 UTC m=+158.602347846 (delta=92.898621ms)
	I0419 20:05:41.509721  388805 fix.go:200] guest clock delta is within tolerance: 92.898621ms
	I0419 20:05:41.509728  388805 start.go:83] releasing machines lock for "ha-423356-m03", held for 28.994892092s
	I0419 20:05:41.509750  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.510026  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:41.513044  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.513458  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.513494  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.515947  388805 out.go:177] * Found network options:
	I0419 20:05:41.517336  388805 out.go:177]   - NO_PROXY=192.168.39.7,192.168.39.121
	W0419 20:05:41.518516  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 20:05:41.518535  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:05:41.518551  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519149  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519366  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519457  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:05:41.519486  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	W0419 20:05:41.519588  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 20:05:41.519613  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:05:41.519696  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:05:41.519721  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.522051  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522322  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522391  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.522417  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522571  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.522659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.522688  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522740  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.522831  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.522911  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.522993  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.523052  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.523122  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.523253  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.764601  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:05:41.770852  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:05:41.770930  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:05:41.788130  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:05:41.788155  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:05:41.788220  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:05:41.804494  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:05:41.819899  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:05:41.819979  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:05:41.835050  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:05:41.849817  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:05:41.977220  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:05:42.153349  388805 docker.go:233] disabling docker service ...
	I0419 20:05:42.153424  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:05:42.170662  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:05:42.185440  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:05:42.306766  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:05:42.436811  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:05:42.452596  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:05:42.471917  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:05:42.471990  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.483281  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:05:42.483354  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.494602  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.507226  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.520991  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:05:42.533385  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.545535  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.565088  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.577308  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:05:42.589447  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:05:42.589517  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:05:42.603436  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:05:42.614245  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:05:42.738267  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:05:42.882243  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:05:42.882336  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:05:42.887517  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:05:42.887568  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:05:42.891669  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:05:42.933597  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:05:42.933682  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:05:42.964730  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:05:42.996171  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:05:42.997531  388805 out.go:177]   - env NO_PROXY=192.168.39.7
	I0419 20:05:42.998808  388805 out.go:177]   - env NO_PROXY=192.168.39.7,192.168.39.121
	I0419 20:05:42.999904  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:43.003049  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:43.003525  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:43.003550  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:43.003795  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:05:43.008264  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:05:43.020924  388805 mustload.go:65] Loading cluster: ha-423356
	I0419 20:05:43.021170  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:43.021441  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:43.021492  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:43.036579  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0419 20:05:43.037129  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:43.037612  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:43.037634  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:43.037966  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:43.038140  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:05:43.039724  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:05:43.040044  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:43.040085  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:43.055254  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0419 20:05:43.055677  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:43.056206  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:43.056232  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:43.056561  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:43.056791  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:05:43.056985  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.111
	I0419 20:05:43.056998  388805 certs.go:194] generating shared ca certs ...
	I0419 20:05:43.057017  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.057176  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:05:43.057235  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:05:43.057251  388805 certs.go:256] generating profile certs ...
	I0419 20:05:43.057361  388805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:05:43.057396  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18
	I0419 20:05:43.057421  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.111 192.168.39.254]
	I0419 20:05:43.213129  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 ...
	I0419 20:05:43.213164  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18: {Name:mk07affa39edd4b79403c8ce6388763e4d72916b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.213357  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18 ...
	I0419 20:05:43.213375  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18: {Name:mk825efea1197d117993825e19ca076825193566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.213478  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:05:43.213618  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:05:43.213747  388805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:05:43.213764  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:05:43.213777  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:05:43.213790  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:05:43.213802  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:05:43.213815  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:05:43.213827  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:05:43.213838  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:05:43.213850  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:05:43.213908  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:05:43.213938  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:05:43.213948  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:05:43.213969  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:05:43.213990  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:05:43.214011  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:05:43.214046  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:05:43.214071  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.214085  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.214097  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.214138  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:05:43.217578  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:43.217973  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:05:43.217999  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:43.218239  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:05:43.218460  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:05:43.218629  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:05:43.218772  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:05:43.297003  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0419 20:05:43.306332  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 20:05:43.319708  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0419 20:05:43.324580  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0419 20:05:43.337027  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 20:05:43.341928  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 20:05:43.356449  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0419 20:05:43.360966  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 20:05:43.377564  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0419 20:05:43.382865  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 20:05:43.403933  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0419 20:05:43.409620  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0419 20:05:43.421889  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:05:43.450062  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:05:43.476554  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:05:43.503266  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:05:43.529298  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0419 20:05:43.561098  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:05:43.589306  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:05:43.615565  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:05:43.643151  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:05:43.671144  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:05:43.701196  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:05:43.726835  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 20:05:43.744776  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0419 20:05:43.763639  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 20:05:43.781653  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 20:05:43.800969  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 20:05:43.818673  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0419 20:05:43.836373  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0419 20:05:43.853934  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:05:43.859990  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:05:43.870978  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.875669  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.875747  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.882009  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:05:43.893299  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:05:43.908975  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.916018  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.916089  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.922820  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:05:43.934423  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:05:43.946418  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.951554  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.951609  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.958159  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:05:43.970155  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:05:43.974749  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:05:43.974815  388805 kubeadm.go:928] updating node {m03 192.168.39.111 8443 v1.30.0 crio true true} ...
	I0419 20:05:43.974921  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:05:43.974962  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:05:43.975012  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:05:43.992465  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:05:43.992534  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 20:05:43.992581  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:05:44.004253  388805 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 20:05:44.004321  388805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 20:05:44.015815  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 20:05:44.015850  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:05:44.015866  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0419 20:05:44.015866  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0419 20:05:44.015891  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:05:44.015912  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:05:44.015922  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:05:44.015962  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:05:44.034224  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:05:44.034240  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 20:05:44.034271  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 20:05:44.034314  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 20:05:44.034333  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:05:44.034341  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 20:05:44.065796  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 20:05:44.065843  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 20:05:45.042494  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 20:05:45.054258  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 20:05:45.072558  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:05:45.091171  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:05:45.108420  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:05:45.112514  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:05:45.125614  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:05:45.264021  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:05:45.285069  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:05:45.285544  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:45.285591  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:45.301995  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0419 20:05:45.302584  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:45.303129  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:45.303164  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:45.303572  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:45.303824  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:05:45.303994  388805 start.go:316] joinCluster: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:05:45.304168  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 20:05:45.304190  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:05:45.307563  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:45.307996  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:05:45.308026  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:45.308208  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:05:45.308426  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:05:45.308597  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:05:45.308779  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:05:45.683924  388805 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:05:45.683974  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2828w.mz4l9arxpw0m036n --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0419 20:06:13.922617  388805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2828w.mz4l9arxpw0m036n --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (28.238613784s)
	I0419 20:06:13.922666  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 20:06:14.536987  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356-m03 minikube.k8s.io/updated_at=2024_04_19T20_06_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=false
	I0419 20:06:14.688778  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-423356-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 20:06:14.812081  388805 start.go:318] duration metric: took 29.508082602s to joinCluster
	I0419 20:06:14.812171  388805 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:06:14.814021  388805 out.go:177] * Verifying Kubernetes components...
	I0419 20:06:14.812485  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:06:14.815801  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:06:15.101973  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:06:15.136788  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:06:15.137176  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 20:06:15.137278  388805 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0419 20:06:15.137619  388805 node_ready.go:35] waiting up to 6m0s for node "ha-423356-m03" to be "Ready" ...
	I0419 20:06:15.137736  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:15.137750  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:15.137762  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:15.137768  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:15.142385  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:15.637991  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:15.638020  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:15.638031  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:15.638037  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:15.641820  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:16.137914  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:16.137942  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:16.137961  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:16.137966  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:16.143360  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:16.638367  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:16.638390  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:16.638400  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:16.638406  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:16.641794  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:17.138709  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:17.138738  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:17.138750  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:17.138755  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:17.142398  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:17.143298  388805 node_ready.go:53] node "ha-423356-m03" has status "Ready":"False"
	I0419 20:06:17.637854  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:17.637888  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:17.637900  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:17.637906  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:17.641734  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:18.137906  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:18.137932  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:18.137941  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:18.137945  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:18.141846  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:18.638458  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:18.638484  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:18.638492  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:18.638497  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:18.642604  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:19.138705  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:19.138734  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:19.138746  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:19.138756  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:19.142297  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:19.143592  388805 node_ready.go:53] node "ha-423356-m03" has status "Ready":"False"
	I0419 20:06:19.638522  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:19.638557  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:19.638568  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:19.638573  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:19.643536  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:20.138147  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:20.138171  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:20.138181  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:20.138190  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:20.142289  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:20.637970  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:20.637994  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:20.638003  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:20.638007  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:20.642092  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.137947  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:21.137985  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.137998  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.138003  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.141854  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.638775  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:21.638801  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.638811  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.638816  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.642170  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.642763  388805 node_ready.go:49] node "ha-423356-m03" has status "Ready":"True"
	I0419 20:06:21.642783  388805 node_ready.go:38] duration metric: took 6.505137736s for node "ha-423356-m03" to be "Ready" ...
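[Editor's note] The node_ready polling above repeatedly GETs the node object until its Ready condition flips to True. As a reading aid only, here is a minimal Go sketch of that pattern using client-go; it is not minikube's node_ready.go implementation, and the helper name waitForNodeReady, the poll interval, and the kubeconfig path are illustrative assumptions.

    // Illustrative sketch only (not minikube's node_ready.go): poll a node's
    // Ready condition with client-go until it is True or the timeout expires.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady is a hypothetical helper name used here for illustration.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForNodeReady(context.Background(), cs, "ha-423356-m03", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }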
	I0419 20:06:21.642794  388805 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:06:21.642866  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:21.642906  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.642921  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.642930  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.650185  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:06:21.658714  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.658834  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9td9f
	I0419 20:06:21.658848  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.658858  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.658867  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.663279  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.663983  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.664004  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.664013  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.664019  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.667036  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.667836  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.667852  388805 pod_ready.go:81] duration metric: took 9.105443ms for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.667871  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.667922  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rr7zk
	I0419 20:06:21.667930  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.667937  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.667940  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.670825  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:21.671552  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.671570  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.671581  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.671589  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.675714  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.676866  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.676883  388805 pod_ready.go:81] duration metric: took 9.003622ms for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.676900  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.676961  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356
	I0419 20:06:21.676973  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.676981  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.676988  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.680352  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.681296  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.681310  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.681315  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.681319  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.683782  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:21.684331  388805 pod_ready.go:92] pod "etcd-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.684350  388805 pod_ready.go:81] duration metric: took 7.441096ms for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.684392  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.684465  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:06:21.684475  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.684484  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.684501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.688207  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.689138  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:21.689154  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.689161  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.689169  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.692580  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.696141  388805 pod_ready.go:92] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.696164  388805 pod_ready.go:81] duration metric: took 11.76079ms for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.696176  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.838864  388805 request.go:629] Waited for 142.603813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:21.838942  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:21.838950  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.838961  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.838972  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.842639  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.039398  388805 request.go:629] Waited for 195.81871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.039463  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.039468  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.039476  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.039480  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.043227  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
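[Editor's note] The "Waited for … due to client-side throttling, not priority and fairness" messages above come from client-go's client-side token-bucket rate limiter, not from server-side API Priority and Fairness. A hedged sketch of where those limits live follows; the QPS and Burst values are illustrative assumptions, not the values minikube actually configures.

    // Sketch: the client-side throttling waits above are governed by the
    // QPS/Burst fields on rest.Config. Values below are illustrative only.
    package example

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 5    // steady-state requests per second before the client starts queueing
    	cfg.Burst = 10 // short bursts allowed above QPS
    	return kubernetes.NewForConfig(cfg)
    }

Raising QPS/Burst would shorten these waits, at the cost of hitting a control plane that is still coming up harder; the logged delays here are well under a second and do not affect the outcome.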
	I0419 20:06:22.239082  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:22.239105  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.239116  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.239123  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.244515  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:22.439059  388805 request.go:629] Waited for 193.350851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.439118  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.439122  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.439130  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.439135  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.442870  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.697216  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:22.697243  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.697251  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.697257  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.700691  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.839508  388805 request.go:629] Waited for 138.045161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.839568  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.839573  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.839581  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.839586  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.843327  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.196819  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:23.196848  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.196858  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.196864  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.204908  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:06:23.239213  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:23.239266  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.239279  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.239289  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.243211  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.696383  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:23.696410  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.696419  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.696424  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.699959  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.700815  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:23.700831  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.700838  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.700844  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.703902  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.704480  388805 pod_ready.go:102] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"False"
	I0419 20:06:24.196843  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:24.196867  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.196876  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.196881  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.201207  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:24.202431  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:24.202459  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.202471  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.202477  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.205617  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:24.696695  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:24.696722  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.696734  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.696743  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.700012  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:24.700679  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:24.700700  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.700712  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.700715  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.703606  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:25.197107  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:25.197133  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.197140  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.197144  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.201265  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:25.202296  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:25.202317  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.202326  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.202329  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.205409  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:25.696947  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:25.696973  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.696979  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.696983  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.701210  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:25.702108  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:25.702125  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.702132  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.702136  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.705573  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:25.706268  388805 pod_ready.go:102] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"False"
	I0419 20:06:26.197306  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:26.197334  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.197344  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.197348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.200859  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:26.201526  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:26.201540  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.201546  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.201550  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.204378  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:26.697343  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:26.697369  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.697380  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.697386  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.702462  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:26.703176  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:26.703190  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.703199  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.703204  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.706216  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:27.197350  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:27.197400  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.197419  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.197431  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.203530  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:27.204368  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.204390  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.204402  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.204408  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.208863  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:27.209532  388805 pod_ready.go:92] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.209552  388805 pod_ready.go:81] duration metric: took 5.513367498s for pod "etcd-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
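[Editor's note] The pod_ready waits above poll each system pod until its Ready condition reports True. A minimal illustrative check of that condition is shown below; isPodReady is a hypothetical name and this is not minikube's pod_ready.go code.

    // Sketch only: determine whether a pod reports Ready, the condition the
    // pod_ready waits above are polling for.
    package podready

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }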
	I0419 20:06:27.209575  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.209640  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356
	I0419 20:06:27.209650  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.209660  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.209668  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.212857  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.213768  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:27.213787  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.213798  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.213803  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.218187  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:27.218780  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.218807  388805 pod_ready.go:81] duration metric: took 9.222056ms for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.218820  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.218989  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m02
	I0419 20:06:27.219011  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.219019  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.219024  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.222546  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.239149  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:27.239168  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.239181  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.239188  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.242216  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.242923  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.242958  388805 pod_ready.go:81] duration metric: took 24.128714ms for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.242972  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.439427  388805 request.go:629] Waited for 196.369758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.439530  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.439536  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.439544  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.439550  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.442868  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.639233  388805 request.go:629] Waited for 195.590331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.639302  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.639308  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.639315  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.639320  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.642879  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.838994  388805 request.go:629] Waited for 95.280816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.839077  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.839086  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.839095  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.839099  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.842612  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.038842  388805 request.go:629] Waited for 195.230671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.038914  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.038926  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.038938  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.038950  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.042660  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.243790  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:28.243819  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.243830  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.243834  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.247999  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:28.439103  388805 request.go:629] Waited for 190.38294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.439191  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.439200  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.439214  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.439225  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.442595  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.743724  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:28.743751  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.743762  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.743773  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.747830  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:28.839184  388805 request.go:629] Waited for 90.19036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.839244  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.839249  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.839259  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.839265  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.843324  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:29.243785  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:29.243830  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.243840  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.243858  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.247970  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:29.248804  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:29.248821  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.248831  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.248839  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.251790  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:29.252447  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:29.252466  388805 pod_ready.go:81] duration metric: took 2.009486005s for pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.252480  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.438857  388805 request.go:629] Waited for 186.304892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:06:29.438943  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:06:29.438951  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.438961  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.438966  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.442496  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:29.639550  388805 request.go:629] Waited for 196.413304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:29.639614  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:29.639620  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.639628  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.639634  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.642803  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:29.643484  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:29.643504  388805 pod_ready.go:81] duration metric: took 391.012562ms for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.643514  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.839701  388805 request.go:629] Waited for 196.078245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:06:29.839774  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:06:29.839782  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.839794  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.839802  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.842904  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.039132  388805 request.go:629] Waited for 195.278783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:30.039190  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:30.039195  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.039203  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.039216  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.042868  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.043570  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:30.043593  388805 pod_ready.go:81] duration metric: took 400.07099ms for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:30.043611  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:30.239724  388805 request.go:629] Waited for 196.012378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.239822  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.239834  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.239845  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.239853  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.243877  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:30.439197  388805 request.go:629] Waited for 194.264801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.439261  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.439267  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.439277  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.439284  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.442927  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.639073  388805 request.go:629] Waited for 94.293643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.639157  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.639165  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.639173  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.639180  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.643288  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:30.839410  388805 request.go:629] Waited for 195.352513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.839470  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.839475  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.839483  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.839487  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.842889  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.044687  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:31.044711  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.044720  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.044726  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.048666  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.238890  388805 request.go:629] Waited for 189.309056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.238973  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.238981  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.238992  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.239001  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.242875  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.544757  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:31.544785  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.544795  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.544799  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.548274  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.639566  388805 request.go:629] Waited for 90.266216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.639631  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.639636  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.639643  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.639647  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.649493  388805 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 20:06:32.044448  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:32.044474  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.044485  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.044491  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.048414  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.049199  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:32.049217  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.049225  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.049229  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.052289  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.052919  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.052947  388805 pod_ready.go:81] duration metric: took 2.009315041s for pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.052961  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.239258  388805 request.go:629] Waited for 186.223997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:06:32.239366  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:06:32.239372  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.239380  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.239388  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.243705  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:32.438843  388805 request.go:629] Waited for 194.326978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:32.438925  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:32.438930  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.438938  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.438943  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.442594  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.443228  388805 pod_ready.go:92] pod "kube-proxy-chd2r" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.443248  388805 pod_ready.go:81] duration metric: took 390.279901ms for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.443259  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.639735  388805 request.go:629] Waited for 196.372073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:06:32.639802  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:06:32.639807  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.639815  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.639820  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.643622  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.839386  388805 request.go:629] Waited for 194.972107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:32.839476  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:32.839485  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.839500  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.839508  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.842947  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.843684  388805 pod_ready.go:92] pod "kube-proxy-d56ch" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.843704  388805 pod_ready.go:81] duration metric: took 400.438188ms for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.843713  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sr4gd" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.039392  388805 request.go:629] Waited for 195.577301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sr4gd
	I0419 20:06:33.039484  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sr4gd
	I0419 20:06:33.039491  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.039502  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.039512  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.043062  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.239750  388805 request.go:629] Waited for 195.841277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:33.239848  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:33.239859  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.239871  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.239882  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.243050  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.243948  388805 pod_ready.go:92] pod "kube-proxy-sr4gd" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:33.243971  388805 pod_ready.go:81] duration metric: took 400.251464ms for pod "kube-proxy-sr4gd" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.243984  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.439149  388805 request.go:629] Waited for 195.06327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:06:33.439223  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:06:33.439232  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.439243  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.439251  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.443391  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:33.639511  388805 request.go:629] Waited for 195.305289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:33.639579  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:33.639584  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.639592  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.639600  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.642892  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.643628  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:33.643652  388805 pod_ready.go:81] duration metric: took 399.660005ms for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.643665  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.839721  388805 request.go:629] Waited for 195.952469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:06:33.839791  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:06:33.839796  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.839804  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.839808  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.843854  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:34.039404  388805 request.go:629] Waited for 194.381115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:34.039466  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:34.039473  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.039484  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.039499  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.043062  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.043739  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:34.043758  388805 pod_ready.go:81] duration metric: took 400.085128ms for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.043770  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.238799  388805 request.go:629] Waited for 194.937207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m03
	I0419 20:06:34.238869  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m03
	I0419 20:06:34.238882  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.238894  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.238904  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.242497  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.438822  388805 request.go:629] Waited for 195.323331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:34.438908  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:34.438914  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.438923  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.438930  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.444594  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:34.445423  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:34.445457  388805 pod_ready.go:81] duration metric: took 401.6787ms for pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.445473  388805 pod_ready.go:38] duration metric: took 12.802663415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:06:34.445496  388805 api_server.go:52] waiting for apiserver process to appear ...
	I0419 20:06:34.445573  388805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:06:34.461697  388805 api_server.go:72] duration metric: took 19.649486037s to wait for apiserver process to appear ...
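[Editor's note] The apiserver process check above runs pgrep inside the VM through minikube's SSH runner (ssh_runner.go). Purely for illustration, a local equivalent of that command might look like the sketch below; apiserverPIDs is a hypothetical helper, not minikube code.

    // Sketch only: run the same pgrep pattern the log shows and return any
    // matching PIDs.
    package proccheck

    import (
    	"os/exec"
    	"strings"
    )

    func apiserverPIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }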
	I0419 20:06:34.461723  388805 api_server.go:88] waiting for apiserver healthz status ...
	I0419 20:06:34.461747  388805 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0419 20:06:34.467927  388805 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0419 20:06:34.468027  388805 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0419 20:06:34.468041  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.468053  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.468060  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.468935  388805 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 20:06:34.469021  388805 api_server.go:141] control plane version: v1.30.0
	I0419 20:06:34.469040  388805 api_server.go:131] duration metric: took 7.309501ms to wait for apiserver health ...
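[Editor's note] The /healthz probe and /version query above can be reproduced with client-go's discovery/REST client. The sketch below is written under that assumption; checkAPIServer is a hypothetical name, not minikube's api_server.go helper.

    // Sketch: probe /healthz and read the control plane version, as the two
    // checks above do.
    package apicheck

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
    	if err != nil {
    		return fmt.Errorf("healthz probe failed: %w", err)
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("version query failed: %w", err)
    	}
    	fmt.Printf("healthz=%s, control plane version=%s\n", string(body), v.GitVersion)
    	return nil
    }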
	I0419 20:06:34.469053  388805 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 20:06:34.639464  388805 request.go:629] Waited for 170.339784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:34.639548  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:34.639554  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.639562  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.639570  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.646376  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:34.652584  388805 system_pods.go:59] 24 kube-system pods found
	I0419 20:06:34.652613  388805 system_pods.go:61] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:06:34.652619  388805 system_pods.go:61] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:06:34.652623  388805 system_pods.go:61] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:06:34.652627  388805 system_pods.go:61] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:06:34.652642  388805 system_pods.go:61] "etcd-ha-423356-m03" [71cf5f8a-a1d9-4b63-9ea9-6613f414aef2] Running
	I0419 20:06:34.652648  388805 system_pods.go:61] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:06:34.652653  388805 system_pods.go:61] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:06:34.652658  388805 system_pods.go:61] "kindnet-fkd5h" [51c38fb9-3969-4d58-9d80-a80e783a27de] Running
	I0419 20:06:34.652663  388805 system_pods.go:61] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:06:34.652669  388805 system_pods.go:61] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:06:34.652674  388805 system_pods.go:61] "kube-apiserver-ha-423356-m03" [97f56f0f-596b-4afb-a960-c2cb16cc57da] Running
	I0419 20:06:34.652677  388805 system_pods.go:61] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:06:34.652685  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:06:34.652688  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m03" [b47707f2-70d7-4e46-84ff-3c16267a050c] Running
	I0419 20:06:34.652691  388805 system_pods.go:61] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:06:34.652694  388805 system_pods.go:61] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:06:34.652700  388805 system_pods.go:61] "kube-proxy-sr4gd" [5d9df920-7b11-4ba5-8811-1aacbc7aa08b] Running
	I0419 20:06:34.652702  388805 system_pods.go:61] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:06:34.652705  388805 system_pods.go:61] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:06:34.652708  388805 system_pods.go:61] "kube-scheduler-ha-423356-m03" [adce0845-d4c7-4a4f-ae6b-013b3fa69963] Running
	I0419 20:06:34.652711  388805 system_pods.go:61] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:06:34.652715  388805 system_pods.go:61] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:06:34.652720  388805 system_pods.go:61] "kube-vip-ha-423356-m03" [742e23a9-c944-4710-a12f-f76f1ea533e9] Running
	I0419 20:06:34.652722  388805 system_pods.go:61] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:06:34.652730  388805 system_pods.go:74] duration metric: took 183.666504ms to wait for pod list to return data ...
	I0419 20:06:34.652741  388805 default_sa.go:34] waiting for default service account to be created ...
	I0419 20:06:34.839213  388805 request.go:629] Waited for 186.394288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:06:34.839326  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:06:34.839342  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.839351  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.839357  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.843181  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.843319  388805 default_sa.go:45] found service account: "default"
	I0419 20:06:34.843338  388805 default_sa.go:55] duration metric: took 190.589653ms for default service account to be created ...
	I0419 20:06:34.843354  388805 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 20:06:35.039124  388805 request.go:629] Waited for 195.686214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:35.039191  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:35.039197  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:35.039206  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:35.039211  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:35.045702  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:35.052867  388805 system_pods.go:86] 24 kube-system pods found
	I0419 20:06:35.052895  388805 system_pods.go:89] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:06:35.052901  388805 system_pods.go:89] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:06:35.052905  388805 system_pods.go:89] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:06:35.052909  388805 system_pods.go:89] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:06:35.052913  388805 system_pods.go:89] "etcd-ha-423356-m03" [71cf5f8a-a1d9-4b63-9ea9-6613f414aef2] Running
	I0419 20:06:35.052917  388805 system_pods.go:89] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:06:35.052921  388805 system_pods.go:89] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:06:35.052925  388805 system_pods.go:89] "kindnet-fkd5h" [51c38fb9-3969-4d58-9d80-a80e783a27de] Running
	I0419 20:06:35.052929  388805 system_pods.go:89] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:06:35.052935  388805 system_pods.go:89] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:06:35.052939  388805 system_pods.go:89] "kube-apiserver-ha-423356-m03" [97f56f0f-596b-4afb-a960-c2cb16cc57da] Running
	I0419 20:06:35.052943  388805 system_pods.go:89] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:06:35.052951  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:06:35.052955  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m03" [b47707f2-70d7-4e46-84ff-3c16267a050c] Running
	I0419 20:06:35.052967  388805 system_pods.go:89] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:06:35.052971  388805 system_pods.go:89] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:06:35.052974  388805 system_pods.go:89] "kube-proxy-sr4gd" [5d9df920-7b11-4ba5-8811-1aacbc7aa08b] Running
	I0419 20:06:35.052981  388805 system_pods.go:89] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:06:35.052986  388805 system_pods.go:89] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:06:35.052993  388805 system_pods.go:89] "kube-scheduler-ha-423356-m03" [adce0845-d4c7-4a4f-ae6b-013b3fa69963] Running
	I0419 20:06:35.052996  388805 system_pods.go:89] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:06:35.053000  388805 system_pods.go:89] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:06:35.053006  388805 system_pods.go:89] "kube-vip-ha-423356-m03" [742e23a9-c944-4710-a12f-f76f1ea533e9] Running
	I0419 20:06:35.053009  388805 system_pods.go:89] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:06:35.053016  388805 system_pods.go:126] duration metric: took 209.65671ms to wait for k8s-apps to be running ...
	I0419 20:06:35.053025  388805 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 20:06:35.053072  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:06:35.068894  388805 system_svc.go:56] duration metric: took 15.857445ms WaitForService to wait for kubelet
	I0419 20:06:35.068923  388805 kubeadm.go:576] duration metric: took 20.256716597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:06:35.068945  388805 node_conditions.go:102] verifying NodePressure condition ...
	I0419 20:06:35.239218  388805 request.go:629] Waited for 170.169877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0419 20:06:35.239276  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0419 20:06:35.239281  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:35.239289  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:35.239294  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:35.243544  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:35.244709  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244732  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244750  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244754  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244758  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244761  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244765  388805 node_conditions.go:105] duration metric: took 175.814541ms to run NodePressure ...
	I0419 20:06:35.244777  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:06:35.244801  388805 start.go:254] writing updated cluster config ...
	I0419 20:06:35.245141  388805 ssh_runner.go:195] Run: rm -f paused
	I0419 20:06:35.298665  388805 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 20:06:35.300894  388805 out.go:177] * Done! kubectl is now configured to use "ha-423356" cluster and "default" namespace by default
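	The start log above ends after a fixed sequence of readiness gates: per-pod "Ready" checks, the apiserver /healthz probe, the kube-system pod listing, the default service account, the kubelet systemd unit, and a NodePressure pass over all three nodes. The following is a minimal Go sketch of the /healthz polling step only; the endpoint, timings, and the pollHealthz helper are assumptions for illustration, not minikube's actual implementation.

```go
// Minimal sketch: poll an apiserver /healthz endpoint until it returns 200 "ok",
// loosely mirroring the "waiting for apiserver healthz status" step in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz is a hypothetical helper, not part of minikube.
func pollHealthz(url string, timeout time.Duration) error {
	// Skip certificate verification for this standalone probe; minikube itself
	// validates the apiserver against the cluster CA instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // matches the "returned 200: ok" lines in the log above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.7:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```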
	
	
	==> CRI-O <==
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.115775270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557408115753031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d2ae85e-32ea-4917-9825-f08951025be1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.116287493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68911857-bbe7-41d1-9c2b-f7ddbd6efefc name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.116367433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68911857-bbe7-41d1-9c2b-f7ddbd6efefc name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.116615986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68911857-bbe7-41d1-9c2b-f7ddbd6efefc name=/runtime.v1.RuntimeService/ListContainers
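	The CRI-O debug entries in this section repeat the same polling cycle (Version, ImageFsInfo, ListContainers with an empty filter) roughly every 50 ms; only the request IDs and timestamps change between iterations. Below is a minimal Go sketch of the ListContainers RPC that produces these entries, assuming CRI-O's default socket path; it is an illustration only and not part of the test harness.

```go
// Minimal sketch: call the CRI ListContainers RPC against a CRI-O socket,
// mirroring the "No filters were applied, returning full container list" entries.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint; crictl talks to the same socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty request (no filter) returns the full container list, as in the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%v\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
```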
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.161408731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8003c83-5b25-48a8-b801-42b873485302 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.161511134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8003c83-5b25-48a8-b801-42b873485302 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.163318941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51ad6eb2-1b2c-4ee0-ab54-84c4ed65b60c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.164000510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557408163973875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51ad6eb2-1b2c-4ee0-ab54-84c4ed65b60c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.164765642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad720c1b-59e9-4f09-b497-2bfeafa23d2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.164824261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad720c1b-59e9-4f09-b497-2bfeafa23d2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.165042628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad720c1b-59e9-4f09-b497-2bfeafa23d2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.209316936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b99b9a6d-8d80-4911-89f8-5e4bc406e35f name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.209393029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b99b9a6d-8d80-4911-89f8-5e4bc406e35f name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.210533394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2653b646-b6a5-4e93-ac9b-6f13a7402a97 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.211039533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557408211015988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2653b646-b6a5-4e93-ac9b-6f13a7402a97 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.211618538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=819bcd15-7091-408e-961c-fee21c459a82 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.211670613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=819bcd15-7091-408e-961c-fee21c459a82 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.211926373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=819bcd15-7091-408e-961c-fee21c459a82 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.257724703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97c38eec-bda7-48b8-ba33-3205a40f68ef name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.257832805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97c38eec-bda7-48b8-ba33-3205a40f68ef name=/runtime.v1.RuntimeService/Version
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.259692283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fc350b9-ca8f-4980-bac2-7ac0866a7249 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.260422630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557408260324973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fc350b9-ca8f-4980-bac2-7ac0866a7249 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.261012418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a819563-c3b3-47f7-ad99-f8afb32bee09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.261163378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a819563-c3b3-47f7-ad99-f8afb32bee09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:10:08 ha-423356 crio[682]: time="2024-04-19 20:10:08.261545275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a819563-c3b3-47f7-ad99-f8afb32bee09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b80b69bd108f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   027b57294cfbd       busybox-fc5497c4f-wqfc4
	dcfa7c435542c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   14c798e2b76b0       coredns-7db6d8ff4d-9td9f
	3b7e6d8be93a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8f591c7ca632f       storage-provisioner
	2382f52abc364       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   8a34b24c4a7dd       coredns-7db6d8ff4d-rr7zk
	5b9312aae8712       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   96603b0da4128       kindnet-bqwfr
	b5377046480e9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   a9af78af7cd87       kube-proxy-chd2r
	56f50a4f74759       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   cbc67ae14f71d       kube-vip-ha-423356
	e7d5dc9bb5064       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   80c3450b238ad       kube-controller-manager-ha-423356
	7f1baf88d5884       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   68e93a81da913       kube-scheduler-ha-423356
	6765b5ae2f794       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   ebb98898864fe       kube-apiserver-ha-423356
	1572778d3f528       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   9ba5078b4acef       etcd-ha-423356
	
	
	==> coredns [2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24] <==
	[INFO] 10.244.2.2:33276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001354s
	[INFO] 10.244.2.2:40300 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253575s
	[INFO] 10.244.2.2:56973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142218s
	[INFO] 10.244.2.2:35913 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176895s
	[INFO] 10.244.1.2:40511 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132923s
	[INFO] 10.244.1.2:34902 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201571s
	[INFO] 10.244.1.2:53225 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001465991s
	[INFO] 10.244.1.2:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258304s
	[INFO] 10.244.1.2:59316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128123s
	[INFO] 10.244.1.2:48977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110722s
	[INFO] 10.244.0.4:40375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793494s
	[INFO] 10.244.0.4:60622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049591s
	[INFO] 10.244.0.4:34038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003778s
	[INFO] 10.244.0.4:51412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043214s
	[INFO] 10.244.0.4:56955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042946s
	[INFO] 10.244.2.2:46864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134976s
	[INFO] 10.244.2.2:34230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011483s
	[INFO] 10.244.1.2:38189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097409s
	[INFO] 10.244.1.2:33041 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080538s
	[INFO] 10.244.0.4:37791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018566s
	[INFO] 10.244.0.4:46485 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061131s
	[INFO] 10.244.0.4:50872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086293s
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142168s
	[INFO] 10.244.1.2:55061 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177752s
	[INFO] 10.244.0.4:44369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008812s
	
	
	==> coredns [dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5] <==
	[INFO] 10.244.0.4:49749 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472384s
	[INFO] 10.244.0.4:55334 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002090348s
	[INFO] 10.244.2.2:56357 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003829125s
	[INFO] 10.244.2.2:35752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290736s
	[INFO] 10.244.2.2:48589 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003091553s
	[INFO] 10.244.2.2:49259 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138019s
	[INFO] 10.244.1.2:50375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000377277s
	[INFO] 10.244.1.2:43502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916758s
	[INFO] 10.244.0.4:50440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109012s
	[INFO] 10.244.0.4:50457 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001351323s
	[INFO] 10.244.0.4:57273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119319s
	[INFO] 10.244.2.2:49275 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210181s
	[INFO] 10.244.2.2:41514 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192084s
	[INFO] 10.244.1.2:56219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000465859s
	[INFO] 10.244.1.2:60572 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114905s
	[INFO] 10.244.0.4:52874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098566s
	[INFO] 10.244.2.2:47734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249839s
	[INFO] 10.244.2.2:50981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179648s
	[INFO] 10.244.2.2:34738 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109005s
	[INFO] 10.244.1.2:37966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181053s
	[INFO] 10.244.1.2:48636 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116821s
	[INFO] 10.244.1.2:52580 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000260337s
	[INFO] 10.244.0.4:43327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088111s
	[INFO] 10.244.0.4:47823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105899s
	[INFO] 10.244.0.4:41223 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050192s
	
	
	==> describe nodes <==
	Name:               ha-423356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:10:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-423356
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133e52820e114c7aa16933b82eb1ac6a
	  System UUID:                133e5282-0e11-4c7a-a169-33b82eb1ac6a
	  Boot ID:                    752cc004-2412-44ee-9782-2d20c1c3993d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqfc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-9td9f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7db6d8ff4d-rr7zk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-423356                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-bqwfr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-423356             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-423356    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-chd2r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-423356             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-423356                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-423356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-423356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-423356 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-423356 status is now: NodeReady
	  Normal  RegisteredNode           4m59s  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal  RegisteredNode           3m39s  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	
	
	Name:               ha-423356-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:04:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:07:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-423356-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 346b871eba5f43789a16ce3dbbb4ec2c
	  System UUID:                346b871e-ba5f-4378-9a16-ce3dbbb4ec2c
	  Boot ID:                    c563aa8d-17e5-4d9b-a5f2-9aac493d81ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fq5c2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-423356-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-7ktc2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-423356-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-423356-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-d56ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-423356-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-423356-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-423356-m02 status is now: NodeNotReady
	
	
	Name:               ha-423356-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_06_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:06:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-423356-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98c76a7ef5ce4a80bed88d9102770ac6
	  System UUID:                98c76a7e-f5ce-4a80-bed8-8d9102770ac6
	  Boot ID:                    a8bf7a9b-27ec-43ce-9057-8997d2be8da7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4t8f9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-423356-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-fkd5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-423356-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-423356-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-sr4gd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-423356-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-423356-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-423356-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	
	
	Name:               ha-423356-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:07:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:09:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-423356-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f1d7a6307945baa5aa5c71ec020b88
	  System UUID:                22f1d7a6-3079-45ba-a5aa-5c71ec020b88
	  Boot ID:                    d9c8dea3-edf9-4bd2-bec6-870cc3e73878
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wj85m       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-7x69m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x3 over 2m54s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x3 over 2m54s)  kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x3 over 2m54s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-423356-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr19 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051835] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040743] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.581158] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.878121] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.106083] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.064815] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057768] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.176439] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.158804] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285884] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.425458] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.068806] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.328517] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.914724] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.592182] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.083040] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.869346] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:04] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d] <==
	{"level":"warn","ts":"2024-04-19T20:10:08.483759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.536788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.549116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.557294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.562679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.577663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.58656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.600862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.605146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.6079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.615328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.621961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.632432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.635973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.6372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.639541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.650154Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.657558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.664191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.667584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.670876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.680638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.687697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.693822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:10:08.737366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:10:08 up 7 min,  0 users,  load average: 0.16, 0.15, 0.08
	Linux ha-423356 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f] <==
	I0419 20:09:30.062524       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:09:40.074340       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:09:40.074396       1 main.go:227] handling current node
	I0419 20:09:40.074412       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:09:40.074421       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:09:40.074585       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:09:40.074625       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:09:40.074766       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:09:40.074800       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:09:50.087693       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:09:50.087758       1 main.go:227] handling current node
	I0419 20:09:50.087781       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:09:50.087792       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:09:50.087968       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:09:50.088008       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:09:50.088153       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:09:50.088191       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:10:00.100469       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:10:00.100516       1 main.go:227] handling current node
	I0419 20:10:00.100535       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:10:00.100542       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:10:00.100666       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:10:00.100696       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:10:00.100749       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:10:00.100776       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6] <==
	I0419 20:03:44.731656       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:03:44.751469       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0419 20:03:44.766346       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:03:57.422370       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0419 20:03:57.823882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0419 20:04:51.720594       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0419 20:04:51.720661       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0419 20:04:51.720608       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 7.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0419 20:04:51.721807       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0419 20:04:51.721970       1 timeout.go:142] post-timeout activity - time-elapsed: 1.482905ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0419 20:06:41.074036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55302: use of closed network connection
	E0419 20:06:41.302880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55322: use of closed network connection
	E0419 20:06:41.528513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55346: use of closed network connection
	E0419 20:06:41.980495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55378: use of closed network connection
	E0419 20:06:42.178624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55394: use of closed network connection
	E0419 20:06:42.395012       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55400: use of closed network connection
	E0419 20:06:42.603522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55414: use of closed network connection
	E0419 20:06:42.813573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55432: use of closed network connection
	E0419 20:06:43.133878       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55456: use of closed network connection
	E0419 20:06:43.362266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55468: use of closed network connection
	E0419 20:06:43.584573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55482: use of closed network connection
	E0419 20:06:43.819350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55502: use of closed network connection
	E0419 20:06:44.022444       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55520: use of closed network connection
	E0419 20:06:44.214707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55542: use of closed network connection
	W0419 20:07:53.556297       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.7]
	
	
	==> kube-controller-manager [e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad] <==
	I0419 20:04:50.893653       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:04:51.810679       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m02"
	I0419 20:06:11.318644       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423356-m03\" does not exist"
	I0419 20:06:11.337213       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:06:11.840978       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m03"
	I0419 20:06:36.312859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.726466ms"
	I0419 20:06:36.368249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.266133ms"
	I0419 20:06:36.368971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="154.345µs"
	I0419 20:06:36.535948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.442461ms"
	I0419 20:06:36.712001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.792385ms"
	E0419 20:06:36.712207       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0419 20:06:36.712826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.915µs"
	I0419 20:06:36.718154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="237.06µs"
	I0419 20:06:40.346315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.023017ms"
	I0419 20:06:40.346537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="167.442µs"
	I0419 20:06:40.534871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.413064ms"
	I0419 20:06:40.589349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.405859ms"
	I0419 20:06:40.589467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.471µs"
	I0419 20:07:15.307504       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423356-m04\" does not exist"
	I0419 20:07:15.352557       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m04" podCIDRs=["10.244.3.0/24"]
	I0419 20:07:16.871281       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m04"
	I0419 20:07:25.883416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423356-m04"
	I0419 20:08:26.913893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423356-m04"
	I0419 20:08:27.014609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.053893ms"
	I0419 20:08:27.015120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.024µs"
	
	
	==> kube-proxy [b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573] <==
	I0419 20:03:58.715526       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:03:58.723910       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0419 20:03:58.792221       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:03:58.792331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:03:58.792410       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:03:58.797371       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:03:58.797629       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:03:58.797669       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:03:58.798793       1 config.go:192] "Starting service config controller"
	I0419 20:03:58.798834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:03:58.798871       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:03:58.798876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:03:58.799665       1 config.go:319] "Starting node config controller"
	I0419 20:03:58.799731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:03:58.899283       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:03:58.899412       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:03:58.899868       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861] <==
	W0419 20:03:42.868111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:03:42.868254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:03:42.939698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 20:03:42.939757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 20:03:42.982791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:03:42.982846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 20:03:45.558861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0419 20:06:11.516884       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gzbf4\": pod kube-proxy-gzbf4 is already assigned to node \"ha-423356-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gzbf4" node="ha-423356-m03"
	E0419 20:06:11.517815       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c5jvm\": pod kindnet-c5jvm is already assigned to node \"ha-423356-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-c5jvm" node="ha-423356-m03"
	E0419 20:06:11.518908       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 650472a1-b2bf-4cc9-97ea-12ec043e8728(kube-system/kindnet-c5jvm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c5jvm"
	E0419 20:06:11.519146       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c5jvm\": pod kindnet-c5jvm is already assigned to node \"ha-423356-m03\"" pod="kube-system/kindnet-c5jvm"
	I0419 20:06:11.519210       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c5jvm" node="ha-423356-m03"
	E0419 20:06:11.518793       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5ebded0d-82e1-4df3-9eac-43f34b7b74db(kube-system/kube-proxy-gzbf4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gzbf4"
	E0419 20:06:11.520145       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gzbf4\": pod kube-proxy-gzbf4 is already assigned to node \"ha-423356-m03\"" pod="kube-system/kube-proxy-gzbf4"
	I0419 20:06:11.520170       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gzbf4" node="ha-423356-m03"
	E0419 20:06:36.281563       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fq5c2\": pod busybox-fc5497c4f-fq5c2 is already assigned to node \"ha-423356-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-fq5c2" node="ha-423356-m02"
	E0419 20:06:36.281696       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4cc0bdd1-d446-460a-a41f-fcd5ef8aa55b(default/busybox-fc5497c4f-fq5c2) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-fq5c2"
	E0419 20:06:36.282008       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fq5c2\": pod busybox-fc5497c4f-fq5c2 is already assigned to node \"ha-423356-m02\"" pod="default/busybox-fc5497c4f-fq5c2"
	I0419 20:06:36.282183       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fq5c2" node="ha-423356-m02"
	E0419 20:07:15.381407       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wj85m\": pod kindnet-wj85m is already assigned to node \"ha-423356-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wj85m" node="ha-423356-m04"
	E0419 20:07:15.381613       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wj85m\": pod kindnet-wj85m is already assigned to node \"ha-423356-m04\"" pod="kube-system/kindnet-wj85m"
	E0419 20:07:15.395423       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7x69m\": pod kube-proxy-7x69m is already assigned to node \"ha-423356-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7x69m" node="ha-423356-m04"
	E0419 20:07:15.395516       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b5bd3478-3c20-44bd-bb1a-26c616d96c19(kube-system/kube-proxy-7x69m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7x69m"
	E0419 20:07:15.395546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7x69m\": pod kube-proxy-7x69m is already assigned to node \"ha-423356-m04\"" pod="kube-system/kube-proxy-7x69m"
	I0419 20:07:15.395576       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7x69m" node="ha-423356-m04"
	
	
	==> kubelet <==
	Apr 19 20:06:36 ha-423356 kubelet[1380]: I0419 20:06:36.307593    1380 topology_manager.go:215] "Topology Admit Handler" podUID="a361495f-5d84-4133-b206-4a42fb8ba66d" podNamespace="default" podName="busybox-fc5497c4f-wqfc4"
	Apr 19 20:06:36 ha-423356 kubelet[1380]: W0419 20:06:36.318963    1380 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-423356" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-423356' and this object
	Apr 19 20:06:36 ha-423356 kubelet[1380]: E0419 20:06:36.319149    1380 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-423356" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-423356' and this object
	Apr 19 20:06:36 ha-423356 kubelet[1380]: I0419 20:06:36.400504    1380 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshzj\" (UniqueName: \"kubernetes.io/projected/a361495f-5d84-4133-b206-4a42fb8ba66d-kube-api-access-kshzj\") pod \"busybox-fc5497c4f-wqfc4\" (UID: \"a361495f-5d84-4133-b206-4a42fb8ba66d\") " pod="default/busybox-fc5497c4f-wqfc4"
	Apr 19 20:06:40 ha-423356 kubelet[1380]: I0419 20:06:40.489729    1380 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-wqfc4" podStartSLOduration=2.3725694920000002 podStartE2EDuration="4.489689309s" podCreationTimestamp="2024-04-19 20:06:36 +0000 UTC" firstStartedPulling="2024-04-19 20:06:37.387595861 +0000 UTC m=+172.861782064" lastFinishedPulling="2024-04-19 20:06:39.504715678 +0000 UTC m=+174.978901881" observedRunningTime="2024-04-19 20:06:40.48940289 +0000 UTC m=+175.963589113" watchObservedRunningTime="2024-04-19 20:06:40.489689309 +0000 UTC m=+175.963875528"
	Apr 19 20:06:44 ha-423356 kubelet[1380]: E0419 20:06:44.674926    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:06:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:07:44 ha-423356 kubelet[1380]: E0419 20:07:44.669196    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:07:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:08:44 ha-423356 kubelet[1380]: E0419 20:08:44.674189    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:08:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:09:44 ha-423356 kubelet[1380]: E0419 20:09:44.669451    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:09:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-423356 -n ha-423356
helpers_test.go:261: (dbg) Run:  kubectl --context ha-423356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.17s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (61.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (3.177339058s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:13.412861  393624 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:13.412972  393624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:13.412980  393624 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:13.412985  393624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:13.413185  393624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:13.413361  393624 out.go:298] Setting JSON to false
	I0419 20:10:13.413392  393624 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:13.413500  393624 notify.go:220] Checking for updates...
	I0419 20:10:13.413784  393624 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:13.413800  393624 status.go:255] checking status of ha-423356 ...
	I0419 20:10:13.414149  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.414206  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.430158  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0419 20:10:13.430615  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.431275  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.431297  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.431700  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.431949  393624 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:13.433942  393624 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:13.433969  393624 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:13.434295  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.434341  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.450280  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0419 20:10:13.450762  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.451267  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.451290  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.451613  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.451838  393624 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:13.454964  393624 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:13.455403  393624 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:13.455441  393624 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:13.455523  393624 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:13.455856  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.455915  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.470909  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0419 20:10:13.471297  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.471790  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.471811  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.472135  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.472374  393624 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:13.472607  393624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:13.472669  393624 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:13.475515  393624 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:13.475970  393624 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:13.476006  393624 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:13.476152  393624 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:13.476458  393624 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:13.476606  393624 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:13.476762  393624 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:13.558292  393624 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:13.564587  393624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:13.588803  393624 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:13.588830  393624 api_server.go:166] Checking apiserver status ...
	I0419 20:10:13.588867  393624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:13.605282  393624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:13.614899  393624 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:13.614948  393624 ssh_runner.go:195] Run: ls
	I0419 20:10:13.620481  393624 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:13.627596  393624 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:13.627626  393624 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:13.627642  393624 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:13.627671  393624 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:13.627989  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.628026  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.642780  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0419 20:10:13.643284  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.643821  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.643853  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.644188  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.644398  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:13.646259  393624 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:10:13.646281  393624 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:13.646570  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.646610  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.661047  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0419 20:10:13.661490  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.661934  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.661959  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.662282  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.662505  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:10:13.665525  393624 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:13.665900  393624 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:13.665922  393624 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:13.666060  393624 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:13.666354  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:13.666389  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:13.680486  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I0419 20:10:13.680982  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:13.681538  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:13.681561  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:13.681862  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:13.682065  393624 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:10:13.682267  393624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:13.682304  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:10:13.684906  393624 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:13.685318  393624 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:13.685341  393624 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:13.685474  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:10:13.685663  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:10:13.685823  393624 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:10:13.685951  393624 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:16.164897  393624 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:16.165016  393624 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:16.165050  393624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:16.165062  393624 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:16.165087  393624 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:16.165102  393624 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:16.165451  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.165514  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.182669  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0419 20:10:16.183160  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.183662  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.183684  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.184010  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.184256  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:16.185758  393624 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:16.185777  393624 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:16.186100  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.186146  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.200838  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0419 20:10:16.201296  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.201785  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.201805  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.202123  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.202322  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:16.204987  393624 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:16.205459  393624 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:16.205489  393624 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:16.205616  393624 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:16.206017  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.206065  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.221037  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I0419 20:10:16.221473  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.221985  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.222013  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.222347  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.222548  393624 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:16.222762  393624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:16.222784  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:16.224950  393624 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:16.225439  393624 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:16.225469  393624 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:16.225557  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:16.225755  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:16.225954  393624 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:16.226106  393624 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:16.320213  393624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:16.336701  393624 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:16.336733  393624 api_server.go:166] Checking apiserver status ...
	I0419 20:10:16.336766  393624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:16.352755  393624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:16.364187  393624 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:16.364238  393624 ssh_runner.go:195] Run: ls
	I0419 20:10:16.369488  393624 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:16.374771  393624 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:16.374799  393624 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:16.374809  393624 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:16.374827  393624 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:16.375104  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.375134  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.390285  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0419 20:10:16.390692  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.391258  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.391280  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.391681  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.391890  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:16.393242  393624 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:16.393266  393624 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:16.393555  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.393593  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.408802  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0419 20:10:16.409253  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.409713  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.409737  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.410104  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.410263  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:16.413098  393624 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:16.413575  393624 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:16.413601  393624 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:16.413755  393624 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:16.414133  393624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:16.414176  393624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:16.428748  393624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I0419 20:10:16.429181  393624 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:16.429676  393624 main.go:141] libmachine: Using API Version  1
	I0419 20:10:16.429698  393624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:16.430054  393624 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:16.430264  393624 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:16.430487  393624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:16.430509  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:16.433428  393624 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:16.433884  393624 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:16.433916  393624 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:16.433995  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:16.434168  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:16.434377  393624 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:16.434539  393624 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:16.516250  393624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:16.530950  393624 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (5.116826141s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:17.613834  393724 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:17.613955  393724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:17.613963  393724 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:17.613967  393724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:17.614164  393724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:17.614347  393724 out.go:298] Setting JSON to false
	I0419 20:10:17.614374  393724 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:17.614505  393724 notify.go:220] Checking for updates...
	I0419 20:10:17.614810  393724 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:17.614830  393724 status.go:255] checking status of ha-423356 ...
	I0419 20:10:17.615267  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.615336  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.631564  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I0419 20:10:17.631990  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.632698  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.632730  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.633105  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.633342  393724 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:17.635035  393724 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:17.635063  393724 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:17.635339  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.635380  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.651383  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0419 20:10:17.651804  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.652265  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.652285  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.652605  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.652808  393724 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:17.655616  393724 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:17.656113  393724 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:17.656151  393724 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:17.656295  393724 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:17.656595  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.656655  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.671519  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37861
	I0419 20:10:17.671960  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.672446  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.672486  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.672830  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.673014  393724 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:17.673227  393724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:17.673266  393724 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:17.675665  393724 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:17.676057  393724 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:17.676088  393724 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:17.676169  393724 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:17.676337  393724 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:17.676503  393724 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:17.676682  393724 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:17.757478  393724 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:17.763764  393724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:17.780467  393724 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:17.780492  393724 api_server.go:166] Checking apiserver status ...
	I0419 20:10:17.780533  393724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:17.796927  393724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:17.806754  393724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:17.806829  393724 ssh_runner.go:195] Run: ls
	I0419 20:10:17.811321  393724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:17.817850  393724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:17.817884  393724 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:17.817899  393724 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:17.817925  393724 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:17.818391  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.818445  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.833660  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0419 20:10:17.834071  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.834612  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.834641  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.834963  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.835213  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:17.836888  393724 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:10:17.836906  393724 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:17.837221  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.837265  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.853580  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39999
	I0419 20:10:17.854094  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.854630  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.854658  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.855021  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.855229  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:10:17.858254  393724 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:17.858812  393724 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:17.858860  393724 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:17.859005  393724 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:17.859311  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:17.859358  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:17.874794  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I0419 20:10:17.875276  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:17.875728  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:17.875749  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:17.876052  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:17.876263  393724 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:10:17.876494  393724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:17.876525  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:10:17.879304  393724 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:17.879702  393724 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:17.879730  393724 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:17.879881  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:10:17.880064  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:10:17.880215  393724 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:10:17.880359  393724 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:19.236938  393724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:19.237002  393724 retry.go:31] will retry after 271.087613ms: dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:22.313057  393724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:22.313180  393724 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:22.313197  393724 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:22.313204  393724 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:22.313230  393724 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:22.313240  393724 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:22.313573  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.313626  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.328839  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0419 20:10:22.329266  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.329739  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.329767  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.330090  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.330377  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:22.331968  393724 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:22.331985  393724 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:22.332274  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.332324  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.346766  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0419 20:10:22.347174  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.347667  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.347697  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.348003  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.348215  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:22.351105  393724 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:22.351656  393724 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:22.351689  393724 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:22.351824  393724 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:22.352144  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.352188  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.366629  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0419 20:10:22.367085  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.367558  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.367578  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.367968  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.368224  393724 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:22.368403  393724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:22.368427  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:22.370988  393724 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:22.371478  393724 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:22.371510  393724 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:22.371690  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:22.371877  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:22.372038  393724 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:22.372170  393724 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:22.458186  393724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:22.473087  393724 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:22.473118  393724 api_server.go:166] Checking apiserver status ...
	I0419 20:10:22.473149  393724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:22.486683  393724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:22.497059  393724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:22.497126  393724 ssh_runner.go:195] Run: ls
	I0419 20:10:22.502283  393724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:22.506835  393724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:22.506869  393724 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:22.506879  393724 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:22.506895  393724 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:22.507179  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.507223  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.522509  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
	I0419 20:10:22.523046  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.523612  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.523644  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.523984  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.524196  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:22.525801  393724 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:22.525821  393724 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:22.526230  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.526269  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.541875  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0419 20:10:22.542376  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.542928  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.542950  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.543315  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.543482  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:22.546113  393724 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:22.546534  393724 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:22.546580  393724 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:22.546694  393724 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:22.546998  393724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:22.547034  393724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:22.562600  393724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0419 20:10:22.562965  393724 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:22.563418  393724 main.go:141] libmachine: Using API Version  1
	I0419 20:10:22.563439  393724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:22.563760  393724 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:22.563938  393724 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:22.564131  393724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:22.564151  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:22.567050  393724 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:22.567420  393724 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:22.567447  393724 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:22.567590  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:22.567764  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:22.567934  393724 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:22.568082  393724 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:22.652483  393724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:22.668743  393724 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
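Reading the stderr trace above, the status probe walks each node, launches the kvm2 plugin server, and then tries to open an SSH session on port 22 to run `df -h /var`; for ha-423356-m02 the dial fails repeatedly with "no route to host", so the node is reported as Host:Error / Kubelet:Nonexistent. The following is only an illustrative sketch (not minikube's actual sshutil/retry code) of the same kind of TCP reachability probe with a short retry loop; the address is taken from the trace and the attempt count and sleep are assumptions.

// probe_ssh.go - illustrative only; not minikube's sshutil/retry implementation.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Target taken from the trace above: ha-423356-m02's SSH endpoint.
	addr := "192.168.39.121:22"
	const attempts = 3 // assumption; minikube's retry schedule differs

	for i := 1; i <= attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is reachable\n", i, addr)
			return
		}
		// "connect: no route to host" surfaces here as a *net.OpError, as in the log.
		fmt.Fprintf(os.Stderr, "attempt %d: %v (will retry)\n", i, err)
		time.Sleep(300 * time.Millisecond)
	}
	fmt.Fprintf(os.Stderr, "%s unreachable after %d attempts\n", addr, attempts)
	os.Exit(1)
}

Run against the failing node this would print the same dial error seen in the trace before giving up, which is roughly how the probe concludes Host:Error for m02 while the other nodes proceed to the kubelet and apiserver checks.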
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (4.852080262s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:24.221781  393825 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:24.221901  393825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:24.221912  393825 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:24.221918  393825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:24.222128  393825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:24.222289  393825 out.go:298] Setting JSON to false
	I0419 20:10:24.222312  393825 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:24.222364  393825 notify.go:220] Checking for updates...
	I0419 20:10:24.223985  393825 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:24.224026  393825 status.go:255] checking status of ha-423356 ...
	I0419 20:10:24.224668  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.224720  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.239740  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0419 20:10:24.240257  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.240789  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.240820  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.241228  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.241460  393825 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:24.243135  393825 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:24.243163  393825 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:24.243514  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.243564  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.258484  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0419 20:10:24.258963  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.259545  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.259568  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.259872  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.260045  393825 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:24.262551  393825 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:24.262924  393825 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:24.262961  393825 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:24.263108  393825 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:24.263397  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.263437  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.278653  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0419 20:10:24.279083  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.279609  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.279641  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.279953  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.280250  393825 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:24.280527  393825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:24.280574  393825 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:24.283768  393825 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:24.284192  393825 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:24.284236  393825 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:24.284313  393825 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:24.284499  393825 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:24.284718  393825 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:24.284841  393825 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:24.368468  393825 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:24.374701  393825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:24.391591  393825 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:24.391626  393825 api_server.go:166] Checking apiserver status ...
	I0419 20:10:24.391671  393825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:24.409325  393825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:24.420331  393825 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:24.420388  393825 ssh_runner.go:195] Run: ls
	I0419 20:10:24.426117  393825 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:24.430562  393825 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:24.430589  393825 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:24.430599  393825 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:24.430619  393825 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:24.430941  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.430987  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.447440  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0419 20:10:24.447897  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.448341  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.448363  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.448718  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.448958  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:24.450445  393825 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:10:24.450464  393825 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:24.450766  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.450798  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.467965  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37961
	I0419 20:10:24.468456  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.468969  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.469011  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.469331  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.469518  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:10:24.472348  393825 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:24.472766  393825 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:24.472795  393825 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:24.472971  393825 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:24.473331  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:24.473382  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:24.488146  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0419 20:10:24.488697  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:24.489163  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:24.489190  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:24.489490  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:24.489675  393825 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:10:24.489888  393825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:24.489907  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:10:24.492322  393825 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:24.492793  393825 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:24.492827  393825 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:24.492951  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:10:24.493113  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:10:24.493260  393825 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:10:24.493406  393825 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:25.380854  393825 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:25.380908  393825 retry.go:31] will retry after 189.556015ms: dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:28.644882  393825 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:28.645000  393825 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:28.645020  393825 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:28.645027  393825 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:28.645051  393825 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:28.645060  393825 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:28.645390  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.645445  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.660742  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0419 20:10:28.661232  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.661703  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.661728  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.662029  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.662247  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:28.663727  393825 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:28.663744  393825 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:28.664039  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.664107  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.678737  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0419 20:10:28.679246  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.679838  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.679869  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.680179  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.680382  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:28.682646  393825 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:28.683167  393825 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:28.683202  393825 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:28.683362  393825 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:28.683830  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.683878  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.698533  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0419 20:10:28.698976  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.699546  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.699579  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.699935  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.700177  393825 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:28.700394  393825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:28.700419  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:28.703087  393825 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:28.703470  393825 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:28.703513  393825 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:28.703616  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:28.703773  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:28.703924  393825 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:28.704027  393825 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:28.791181  393825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:28.808915  393825 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:28.808949  393825 api_server.go:166] Checking apiserver status ...
	I0419 20:10:28.808991  393825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:28.826359  393825 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:28.839555  393825 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:28.839611  393825 ssh_runner.go:195] Run: ls
	I0419 20:10:28.845674  393825 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:28.849936  393825 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:28.849961  393825 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:28.849973  393825 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:28.849994  393825 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:28.850380  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.850427  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.866226  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0419 20:10:28.866718  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.867246  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.867268  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.867554  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.867768  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:28.869320  393825 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:28.869342  393825 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:28.869659  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.869703  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.885956  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0419 20:10:28.886408  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.886882  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.886905  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.887237  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.887443  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:28.890359  393825 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:28.890838  393825 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:28.890870  393825 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:28.891068  393825 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:28.891449  393825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:28.891494  393825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:28.906678  393825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0419 20:10:28.907061  393825 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:28.907454  393825 main.go:141] libmachine: Using API Version  1
	I0419 20:10:28.907476  393825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:28.907753  393825 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:28.907967  393825 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:28.908160  393825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:28.908185  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:28.910493  393825 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:28.910902  393825 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:28.910931  393825 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:28.911081  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:28.911279  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:28.911448  393825 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:28.911604  393825 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:28.996736  393825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:29.010851  393825 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
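For the nodes that are reachable, the trace shows the happy path: `sudo systemctl is-active --quiet service kubelet`, then a GET against the shared apiserver endpoint https://192.168.39.254:8443/healthz, which returns 200 "ok". As a minimal sketch only (not minikube's api_server.go code), the snippet below performs the same healthz request; the URL is taken from the log, and skipping TLS verification is an assumption made because the test VM uses a self-signed certificate.

// healthz_check.go - illustrative only; mirrors the apiserver probe seen in the trace.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Endpoint taken from the log: the control-plane VIP used by ha-423356.
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped purely for illustration against the
		// self-signed test cluster; never do this against a production cluster.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintf(os.Stderr, "healthz request failed: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns HTTP 200 with body "ok", matching the log above.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}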
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (4.239927152s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:31.345093  393941 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:31.345266  393941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:31.345292  393941 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:31.345301  393941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:31.345481  393941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:31.345669  393941 out.go:298] Setting JSON to false
	I0419 20:10:31.345705  393941 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:31.345865  393941 notify.go:220] Checking for updates...
	I0419 20:10:31.346149  393941 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:31.346168  393941 status.go:255] checking status of ha-423356 ...
	I0419 20:10:31.346663  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.346718  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.369601  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
	I0419 20:10:31.370235  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.370984  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.371037  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.371508  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.371770  393941 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:31.373619  393941 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:31.373645  393941 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:31.373987  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.374065  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.389628  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0419 20:10:31.390036  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.390588  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.390612  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.391049  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.391271  393941 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:31.394163  393941 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:31.394635  393941 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:31.394663  393941 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:31.394814  393941 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:31.395100  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.395135  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.411066  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I0419 20:10:31.411468  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.411924  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.411946  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.412387  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.412596  393941 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:31.412865  393941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:31.412897  393941 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:31.415695  393941 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:31.416150  393941 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:31.416180  393941 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:31.416315  393941 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:31.416515  393941 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:31.416673  393941 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:31.416844  393941 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:31.496656  393941 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:31.503284  393941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:31.518527  393941 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:31.518562  393941 api_server.go:166] Checking apiserver status ...
	I0419 20:10:31.518613  393941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:31.532610  393941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:31.542791  393941 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:31.542858  393941 ssh_runner.go:195] Run: ls
	I0419 20:10:31.547312  393941 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:31.551700  393941 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:31.551722  393941 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:31.551733  393941 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:31.551758  393941 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:31.552050  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.552084  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.567370  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0419 20:10:31.567762  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.568214  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.568235  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.568573  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.568827  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:31.570367  393941 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:10:31.570386  393941 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:31.570686  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.570735  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.585908  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0419 20:10:31.586366  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.586958  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.586992  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.587324  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.587528  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:10:31.590388  393941 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:31.590797  393941 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:31.590825  393941 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:31.591014  393941 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:31.591507  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:31.591566  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:31.606444  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0419 20:10:31.606856  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:31.607318  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:31.607338  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:31.607681  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:31.607883  393941 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:10:31.608085  393941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:31.608112  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:10:31.610759  393941 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:31.611235  393941 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:31.611264  393941 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:31.611341  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:10:31.611522  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:10:31.611666  393941 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:10:31.611793  393941 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:31.716896  393941 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:31.716944  393941 retry.go:31] will retry after 369.340432ms: dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:35.144906  393941 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:35.145019  393941 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:35.145051  393941 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:35.145062  393941 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:35.145108  393941 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:35.145118  393941 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:35.145471  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.145519  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.161342  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0419 20:10:35.161834  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.162352  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.162375  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.162740  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.163013  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:35.164849  393941 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:35.164868  393941 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:35.165196  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.165265  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.181031  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39473
	I0419 20:10:35.181667  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.182297  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.182362  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.182783  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.183039  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:35.186150  393941 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:35.186566  393941 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:35.186603  393941 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:35.186718  393941 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:35.187013  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.187052  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.205565  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I0419 20:10:35.206073  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.206657  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.206683  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.207043  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.207227  393941 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:35.207485  393941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:35.207513  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:35.210662  393941 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:35.211160  393941 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:35.211190  393941 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:35.211362  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:35.211575  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:35.211762  393941 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:35.211933  393941 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:35.306164  393941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:35.325097  393941 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:35.325129  393941 api_server.go:166] Checking apiserver status ...
	I0419 20:10:35.325164  393941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:35.340050  393941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:35.350981  393941 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:35.351042  393941 ssh_runner.go:195] Run: ls
	I0419 20:10:35.355911  393941 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:35.360574  393941 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:35.360600  393941 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:35.360608  393941 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:35.360626  393941 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:35.360978  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.361025  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.376791  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0419 20:10:35.377263  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.377749  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.377774  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.378129  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.378339  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:35.380176  393941 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:35.380196  393941 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:35.380626  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.380687  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.397147  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I0419 20:10:35.397730  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.398249  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.398277  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.398626  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.398834  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:35.402197  393941 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:35.402787  393941 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:35.402826  393941 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:35.403041  393941 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:35.403484  393941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:35.403524  393941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:35.419590  393941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37985
	I0419 20:10:35.420087  393941 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:35.420539  393941 main.go:141] libmachine: Using API Version  1
	I0419 20:10:35.420560  393941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:35.420920  393941 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:35.421106  393941 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:35.421255  393941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:35.421281  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:35.423891  393941 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:35.424314  393941 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:35.424346  393941 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:35.424569  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:35.424749  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:35.424908  393941 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:35.425027  393941 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:35.509030  393941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:35.524169  393941 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (3.776000634s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:40.063213  394043 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:40.063346  394043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:40.063354  394043 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:40.063358  394043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:40.063577  394043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:40.063760  394043 out.go:298] Setting JSON to false
	I0419 20:10:40.063800  394043 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:40.063937  394043 notify.go:220] Checking for updates...
	I0419 20:10:40.064265  394043 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:40.064290  394043 status.go:255] checking status of ha-423356 ...
	I0419 20:10:40.064798  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.064881  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.083453  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0419 20:10:40.084089  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.084685  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.084710  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.085047  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.085248  394043 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:40.086808  394043 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:40.086834  394043 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:40.087146  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.087186  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.103443  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I0419 20:10:40.103835  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.104273  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.104303  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.104671  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.104886  394043 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:40.107722  394043 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:40.108163  394043 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:40.108193  394043 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:40.108319  394043 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:40.108665  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.108715  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.124050  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0419 20:10:40.124498  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.125002  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.125029  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.125334  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.125600  394043 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:40.125806  394043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:40.125852  394043 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:40.128336  394043 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:40.128755  394043 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:40.128781  394043 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:40.128933  394043 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:40.129108  394043 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:40.129239  394043 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:40.129382  394043 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:40.212388  394043 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:40.219578  394043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:40.235695  394043 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:40.235727  394043 api_server.go:166] Checking apiserver status ...
	I0419 20:10:40.235767  394043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:40.252119  394043 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:40.266143  394043 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:40.266242  394043 ssh_runner.go:195] Run: ls
	I0419 20:10:40.272137  394043 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:40.280686  394043 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:40.280721  394043 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:40.280736  394043 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:40.280757  394043 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:40.281136  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.281174  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.296686  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0419 20:10:40.297145  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.297671  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.297692  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.298039  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.298235  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:40.299888  394043 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:10:40.299909  394043 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:40.300211  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.300253  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.316553  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0419 20:10:40.317021  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.317517  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.317549  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.317859  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.318049  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:10:40.320668  394043 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:40.321166  394043 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:40.321196  394043 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:40.321361  394043 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:10:40.321673  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:40.321710  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:40.337012  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0419 20:10:40.337480  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:40.337957  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:40.337982  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:40.338317  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:40.338512  394043 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:10:40.338704  394043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:40.338728  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:10:40.341704  394043 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:40.342169  394043 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:10:40.342196  394043 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:10:40.342319  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:10:40.342521  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:10:40.342651  394043 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:10:40.342761  394043 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	W0419 20:10:43.396878  394043 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0419 20:10:43.396989  394043 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0419 20:10:43.397013  394043 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:43.397026  394043 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:10:43.397042  394043 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0419 20:10:43.397050  394043 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:43.397404  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.397490  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.413684  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I0419 20:10:43.414107  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.414599  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.414627  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.414937  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.415156  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:43.416757  394043 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:43.416775  394043 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:43.417089  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.417135  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.432153  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0419 20:10:43.432585  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.433164  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.433191  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.433520  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.433715  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:43.437038  394043 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:43.437468  394043 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:43.437500  394043 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:43.437667  394043 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:43.438040  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.438095  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.456187  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0419 20:10:43.456749  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.457291  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.457321  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.457678  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.457914  394043 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:43.458155  394043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:43.458188  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:43.461059  394043 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:43.461575  394043 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:43.461647  394043 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:43.461770  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:43.461974  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:43.462186  394043 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:43.462380  394043 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:43.554653  394043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:43.570330  394043 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:43.570377  394043 api_server.go:166] Checking apiserver status ...
	I0419 20:10:43.570415  394043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:43.589160  394043 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:43.601549  394043 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:43.601623  394043 ssh_runner.go:195] Run: ls
	I0419 20:10:43.606589  394043 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:43.613214  394043 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:43.613247  394043 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:43.613259  394043 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:43.613277  394043 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:43.613677  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.613744  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.629164  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0419 20:10:43.629596  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.630104  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.630127  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.630453  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.630637  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:43.632485  394043 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:43.632503  394043 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:43.632941  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.633001  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.650040  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0419 20:10:43.650522  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.651060  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.651089  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.651492  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.651685  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:43.654404  394043 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:43.654853  394043 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:43.654886  394043 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:43.654980  394043 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:43.655297  394043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:43.655337  394043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:43.672459  394043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0419 20:10:43.672942  394043 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:43.673404  394043 main.go:141] libmachine: Using API Version  1
	I0419 20:10:43.673428  394043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:43.673763  394043 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:43.673975  394043 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:43.674149  394043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:43.674170  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:43.677265  394043 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:43.677806  394043 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:43.677832  394043 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:43.678004  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:43.678202  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:43.678352  394043 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:43.678483  394043 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:43.764492  394043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:43.779535  394043 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 7 (664.482078ms)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:10:49.604271  394175 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:10:49.604595  394175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:49.604607  394175 out.go:304] Setting ErrFile to fd 2...
	I0419 20:10:49.604611  394175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:10:49.604831  394175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:10:49.605059  394175 out.go:298] Setting JSON to false
	I0419 20:10:49.605090  394175 mustload.go:65] Loading cluster: ha-423356
	I0419 20:10:49.605216  394175 notify.go:220] Checking for updates...
	I0419 20:10:49.605568  394175 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:10:49.605593  394175 status.go:255] checking status of ha-423356 ...
	I0419 20:10:49.606088  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.606167  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.628150  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0419 20:10:49.628670  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.629315  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.629337  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.629760  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.629957  394175 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:10:49.631614  394175 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:10:49.631638  394175 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:49.631923  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.631973  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.647570  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0419 20:10:49.648040  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.648530  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.648552  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.648875  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.649047  394175 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:10:49.652130  394175 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:49.652527  394175 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:49.652556  394175 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:49.652714  394175 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:10:49.653159  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.653208  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.668302  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0419 20:10:49.668740  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.669290  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.669315  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.669632  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.669831  394175 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:10:49.669994  394175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:49.670018  394175 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:10:49.672890  394175 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:49.673342  394175 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:10:49.673362  394175 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:10:49.673525  394175 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:10:49.673701  394175 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:10:49.673857  394175 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:10:49.673997  394175 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:10:49.759787  394175 ssh_runner.go:195] Run: systemctl --version
	I0419 20:10:49.766881  394175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:49.782023  394175 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:49.782052  394175 api_server.go:166] Checking apiserver status ...
	I0419 20:10:49.782084  394175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:49.796974  394175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:10:49.807030  394175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:49.807100  394175 ssh_runner.go:195] Run: ls
	I0419 20:10:49.812160  394175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:49.816687  394175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:49.816721  394175 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:10:49.816736  394175 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:49.816753  394175 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:10:49.817051  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.817107  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.832863  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0419 20:10:49.833354  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.833909  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.833936  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.834241  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.834464  394175 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:10:49.843508  394175 status.go:330] ha-423356-m02 host status = "Stopped" (err=<nil>)
	I0419 20:10:49.843530  394175 status.go:343] host is not running, skipping remaining checks
	I0419 20:10:49.843537  394175 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:49.843555  394175 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:10:49.843846  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.843884  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.858870  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0419 20:10:49.859340  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.859826  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.859850  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.860196  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.860378  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:10:49.862056  394175 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:10:49.862075  394175 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:49.862364  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.862399  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.877548  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0419 20:10:49.877987  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.878489  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.878504  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.878843  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.879019  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:10:49.881975  394175 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:49.882426  394175 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:49.882459  394175 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:49.882618  394175 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:10:49.882908  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:49.882948  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:49.897660  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0419 20:10:49.898150  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:49.898597  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:49.898620  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:49.898937  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:49.899102  394175 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:10:49.899265  394175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:49.899291  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:10:49.901933  394175 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:49.902358  394175 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:10:49.902384  394175 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:10:49.902472  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:10:49.902623  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:10:49.902782  394175 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:10:49.902997  394175 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:10:49.989121  394175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:50.006519  394175 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:10:50.006567  394175 api_server.go:166] Checking apiserver status ...
	I0419 20:10:50.006625  394175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:10:50.021450  394175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:10:50.031994  394175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:10:50.032067  394175 ssh_runner.go:195] Run: ls
	I0419 20:10:50.036827  394175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:10:50.041120  394175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:10:50.041146  394175 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:10:50.041155  394175 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:10:50.041171  394175 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:10:50.041500  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:50.041544  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:50.057004  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0419 20:10:50.057500  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:50.058089  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:50.058119  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:50.058496  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:50.058734  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:10:50.060325  394175 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:10:50.060341  394175 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:50.060613  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:50.060667  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:50.076311  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0419 20:10:50.076804  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:50.077293  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:50.077314  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:50.077674  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:50.077858  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:10:50.080776  394175 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:50.081212  394175 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:50.081237  394175 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:50.081346  394175 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:10:50.081668  394175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:10:50.081710  394175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:10:50.097738  394175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0419 20:10:50.098130  394175 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:10:50.098572  394175 main.go:141] libmachine: Using API Version  1
	I0419 20:10:50.098590  394175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:10:50.098932  394175 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:10:50.099158  394175 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:10:50.099395  394175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:10:50.099424  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:10:50.102419  394175 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:50.102812  394175 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:10:50.102843  394175 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:10:50.102985  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:10:50.103194  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:10:50.103368  394175 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:10:50.103522  394175 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:10:50.193469  394175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:10:50.208566  394175 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 7 (657.404403ms)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:11:00.028412  394285 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:11:00.028521  394285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:00.028531  394285 out.go:304] Setting ErrFile to fd 2...
	I0419 20:11:00.028537  394285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:00.028740  394285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:11:00.028939  394285 out.go:298] Setting JSON to false
	I0419 20:11:00.028972  394285 mustload.go:65] Loading cluster: ha-423356
	I0419 20:11:00.029031  394285 notify.go:220] Checking for updates...
	I0419 20:11:00.029551  394285 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:11:00.029575  394285 status.go:255] checking status of ha-423356 ...
	I0419 20:11:00.030016  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.030086  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.049215  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0419 20:11:00.049694  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.050272  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.050301  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.050723  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.051040  394285 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:11:00.052732  394285 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:11:00.052753  394285 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:11:00.053096  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.053136  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.068947  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0419 20:11:00.069373  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.069945  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.069970  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.070284  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.070506  394285 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:11:00.073274  394285 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:00.073668  394285 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:11:00.073695  394285 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:00.073837  394285 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:11:00.074145  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.074193  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.090122  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0419 20:11:00.090588  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.091132  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.091158  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.091536  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.091765  394285 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:11:00.091983  394285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:00.092028  394285 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:11:00.094682  394285 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:00.095078  394285 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:11:00.095099  394285 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:00.095347  394285 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:11:00.095542  394285 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:11:00.095684  394285 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:11:00.095849  394285 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:11:00.186532  394285 ssh_runner.go:195] Run: systemctl --version
	I0419 20:11:00.197934  394285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:00.216842  394285 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:11:00.216872  394285 api_server.go:166] Checking apiserver status ...
	I0419 20:11:00.216907  394285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:11:00.231803  394285 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:11:00.242393  394285 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:11:00.242455  394285 ssh_runner.go:195] Run: ls
	I0419 20:11:00.248532  394285 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:11:00.254802  394285 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:11:00.254826  394285 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:11:00.254837  394285 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:00.254861  394285 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:11:00.255170  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.255230  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.270113  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0419 20:11:00.270569  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.271081  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.271101  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.271399  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.271576  394285 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:11:00.273392  394285 status.go:330] ha-423356-m02 host status = "Stopped" (err=<nil>)
	I0419 20:11:00.273417  394285 status.go:343] host is not running, skipping remaining checks
	I0419 20:11:00.273426  394285 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:00.273452  394285 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:11:00.273719  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.273762  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.288400  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0419 20:11:00.288834  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.289342  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.289366  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.289696  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.289880  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:11:00.291396  394285 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:11:00.291419  394285 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:11:00.291752  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.291788  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.307304  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0419 20:11:00.307724  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.308204  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.308229  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.308625  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.308854  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:11:00.311789  394285 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:00.312239  394285 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:11:00.312270  394285 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:00.312349  394285 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:11:00.312807  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.312857  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.327824  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0419 20:11:00.328226  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.328744  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.328773  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.329089  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.329289  394285 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:11:00.329487  394285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:00.329516  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:11:00.331890  394285 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:00.332256  394285 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:11:00.332282  394285 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:00.332392  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:11:00.332550  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:11:00.332723  394285 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:11:00.332862  394285 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:11:00.417291  394285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:00.434139  394285 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:11:00.434188  394285 api_server.go:166] Checking apiserver status ...
	I0419 20:11:00.434244  394285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:11:00.448788  394285 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:11:00.459335  394285 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:11:00.459388  394285 ssh_runner.go:195] Run: ls
	I0419 20:11:00.463816  394285 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:11:00.468128  394285 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:11:00.468155  394285 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:11:00.468166  394285 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:00.468190  394285 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:11:00.468579  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.468647  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.483363  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36737
	I0419 20:11:00.483816  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.484322  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.484351  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.484752  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.484940  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:11:00.486596  394285 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:11:00.486617  394285 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:11:00.486962  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.487010  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.504192  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0419 20:11:00.504728  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.505334  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.505364  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.505764  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.505987  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:11:00.508663  394285 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:00.509057  394285 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:11:00.509083  394285 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:00.509251  394285 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:11:00.509556  394285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:00.509600  394285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:00.524320  394285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0419 20:11:00.524865  394285 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:00.525385  394285 main.go:141] libmachine: Using API Version  1
	I0419 20:11:00.525415  394285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:00.525768  394285 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:00.525961  394285 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:11:00.526141  394285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:00.526166  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:11:00.528751  394285 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:00.529172  394285 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:11:00.529196  394285 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:00.529330  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:11:00.529545  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:11:00.529670  394285 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:11:00.529793  394285 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:11:00.611971  394285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:00.626679  394285 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
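The repeated "unable to find freezer cgroup" warnings in the trace above come from a lookup equivalent to `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`; egrep exits 1 whenever no such controller line is present (for example on cgroup v2 guests, which only expose a `0::/...` entry), and the status check simply continues on to the healthz probe. The following is a minimal Go sketch of that lookup under those assumptions; the function name `findFreezerCgroup` and the standalone program are illustrative, not minikube's actual code.

// Hypothetical re-creation of the freezer-cgroup lookup run over SSH in the
// logs above (pgrep for kube-apiserver, then egrep on /proc/<pid>/cgroup).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// findFreezerCgroup scans /proc/<pid>/cgroup for an "N:freezer:..." entry,
// mirroring `egrep ^[0-9]+:freezer:`. On cgroup v2 hosts the file only holds
// a "0::/..." line, so the scan finds nothing and an error is returned.
func findFreezerCgroup(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	re := regexp.MustCompile(`^[0-9]+:freezer:(.*)$`)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			return m[1], nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup entry for pid %d", pid)
}

func main() {
	// PID 1184 is the apiserver PID reported by pgrep in the log above.
	path, err := findFreezerCgroup(1184)
	if err != nil {
		// The real status code logs a warning here and falls back to the healthz probe.
		fmt.Fprintln(os.Stderr, "warning:", err)
		return
	}
	fmt.Println("freezer cgroup:", path)
}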
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 7 (662.475332ms)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423356-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:11:11.346808  394406 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:11:11.347100  394406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:11.347114  394406 out.go:304] Setting ErrFile to fd 2...
	I0419 20:11:11.347118  394406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:11.347326  394406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:11:11.347511  394406 out.go:298] Setting JSON to false
	I0419 20:11:11.347542  394406 mustload.go:65] Loading cluster: ha-423356
	I0419 20:11:11.347590  394406 notify.go:220] Checking for updates...
	I0419 20:11:11.348080  394406 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:11:11.348106  394406 status.go:255] checking status of ha-423356 ...
	I0419 20:11:11.348572  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.348626  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.368489  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0419 20:11:11.369032  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.369603  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.369626  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.370082  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.370342  394406 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:11:11.372122  394406 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:11:11.372152  394406 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:11:11.372581  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.372681  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.388276  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0419 20:11:11.388696  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.389185  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.389212  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.389533  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.389725  394406 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:11:11.392242  394406 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:11.392822  394406 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:11:11.392861  394406 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:11.393021  394406 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:11:11.393419  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.393479  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.409175  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0419 20:11:11.409647  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.410122  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.410143  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.410439  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.410634  394406 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:11:11.410801  394406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:11.410823  394406 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:11:11.413372  394406 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:11.413803  394406 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:11:11.413835  394406 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:11:11.413992  394406 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:11:11.414162  394406 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:11:11.414315  394406 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:11:11.414464  394406 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:11:11.497203  394406 ssh_runner.go:195] Run: systemctl --version
	I0419 20:11:11.503621  394406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:11.521138  394406 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:11:11.521167  394406 api_server.go:166] Checking apiserver status ...
	I0419 20:11:11.521201  394406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:11:11.536064  394406 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W0419 20:11:11.546790  394406 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:11:11.546869  394406 ssh_runner.go:195] Run: ls
	I0419 20:11:11.551988  394406 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:11:11.556242  394406 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:11:11.556271  394406 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:11:11.556282  394406 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:11.556304  394406 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:11:11.556714  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.556760  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.572881  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0419 20:11:11.573314  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.573822  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.573849  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.574217  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.574412  394406 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:11:11.575893  394406 status.go:330] ha-423356-m02 host status = "Stopped" (err=<nil>)
	I0419 20:11:11.575909  394406 status.go:343] host is not running, skipping remaining checks
	I0419 20:11:11.575929  394406 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:11.575962  394406 status.go:255] checking status of ha-423356-m03 ...
	I0419 20:11:11.576246  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.576291  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.590711  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0419 20:11:11.591163  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.591622  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.591638  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.591954  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.592126  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:11:11.593662  394406 status.go:330] ha-423356-m03 host status = "Running" (err=<nil>)
	I0419 20:11:11.593681  394406 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:11:11.594067  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.594113  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.609412  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0419 20:11:11.609953  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.610626  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.610657  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.611036  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.611263  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:11:11.614622  394406 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:11.615109  394406 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:11:11.615139  394406 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:11.615270  394406 host.go:66] Checking if "ha-423356-m03" exists ...
	I0419 20:11:11.615624  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.615664  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.630294  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0419 20:11:11.630769  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.631259  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.631283  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.631574  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.631743  394406 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:11:11.631911  394406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:11.631935  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:11:11.634610  394406 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:11.635026  394406 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:11:11.635061  394406 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:11.635213  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:11:11.635348  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:11:11.635504  394406 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:11:11.635640  394406 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:11:11.728613  394406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:11.748087  394406 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:11:11.748125  394406 api_server.go:166] Checking apiserver status ...
	I0419 20:11:11.748169  394406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:11:11.765473  394406 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0419 20:11:11.775844  394406 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:11:11.775943  394406 ssh_runner.go:195] Run: ls
	I0419 20:11:11.780481  394406 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:11:11.785765  394406 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:11:11.785787  394406 status.go:422] ha-423356-m03 apiserver status = Running (err=<nil>)
	I0419 20:11:11.785796  394406 status.go:257] ha-423356-m03 status: &{Name:ha-423356-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:11:11.785812  394406 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:11:11.786132  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.786175  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.802642  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0419 20:11:11.803064  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.803532  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.803559  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.803823  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.804019  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:11:11.805639  394406 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:11:11.805656  394406 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:11:11.805929  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.805969  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.821707  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I0419 20:11:11.822182  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.822583  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.822604  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.822958  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.823149  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:11:11.825657  394406 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:11.826110  394406 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:11:11.826140  394406 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:11.826298  394406 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:11:11.826597  394406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:11.826635  394406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:11.842298  394406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0419 20:11:11.842763  394406 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:11.843229  394406 main.go:141] libmachine: Using API Version  1
	I0419 20:11:11.843256  394406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:11.843589  394406 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:11.843755  394406 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:11:11.843943  394406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:11:11.843971  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:11:11.846459  394406 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:11.846856  394406 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:11:11.846889  394406 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:11.847058  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:11:11.847258  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:11:11.847414  394406 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:11:11.847529  394406 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:11:11.932364  394406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:11:11.946699  394406 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr" : exit status 7
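In the status traces above, each running control plane is ultimately judged healthy by a GET against https://192.168.39.254:8443/healthz returning 200 with body "ok". A minimal standalone probe along those lines is sketched below; skipping TLS verification is an assumption made here for a quick manual check and is not how minikube builds its API client.

// Hypothetical manual probe of the apiserver health endpoint checked by
// api_server.go ("Checking apiserver healthz ...") in the logs above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for a quick manual check only; do not verify the apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", matching the log lines above.
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}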
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-423356 -n ha-423356
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-423356 logs -n 25: (1.479094802s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m03_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m04 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp testdata/cp-test.txt                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m04_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03:/home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m03 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-423356 node stop m02 -v=7                                                     | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-423356 node start m02 -v=7                                                    | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:03:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:03:02.845033  388805 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:03:02.845273  388805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:03:02.845282  388805 out.go:304] Setting ErrFile to fd 2...
	I0419 20:03:02.845286  388805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:03:02.845488  388805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:03:02.846074  388805 out.go:298] Setting JSON to false
	I0419 20:03:02.847027  388805 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6329,"bootTime":1713550654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:03:02.847103  388805 start.go:139] virtualization: kvm guest
	I0419 20:03:02.849294  388805 out.go:177] * [ha-423356] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:03:02.850788  388805 notify.go:220] Checking for updates...
	I0419 20:03:02.850799  388805 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:03:02.852415  388805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:03:02.854180  388805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:02.855527  388805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:02.856730  388805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:03:02.858102  388805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:03:02.859530  388805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:03:02.895033  388805 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 20:03:02.896430  388805 start.go:297] selected driver: kvm2
	I0419 20:03:02.896441  388805 start.go:901] validating driver "kvm2" against <nil>
	I0419 20:03:02.896454  388805 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:03:02.897175  388805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:03:02.897263  388805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:03:02.912832  388805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:03:02.912885  388805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 20:03:02.913116  388805 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:03:02.913190  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:02.913202  388805 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 20:03:02.913207  388805 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 20:03:02.913266  388805 start.go:340] cluster config:
	{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:03:02.913370  388805 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:03:02.915373  388805 out.go:177] * Starting "ha-423356" primary control-plane node in "ha-423356" cluster
	I0419 20:03:02.916990  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:02.917035  388805 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:03:02.917046  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:03:02.917164  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:03:02.917178  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:03:02.917469  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:02.917491  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json: {Name:mk412b5f97f86b0ffa73cd379f7e787167939ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:02.917655  388805 start.go:360] acquireMachinesLock for ha-423356: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:03:02.917692  388805 start.go:364] duration metric: took 18.288µs to acquireMachinesLock for "ha-423356"
	I0419 20:03:02.917717  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:02.917816  388805 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 20:03:02.919511  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:03:02.919654  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:02.919707  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:02.934351  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0419 20:03:02.934822  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:02.935463  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:02.935488  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:02.935946  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:02.936157  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:02.936332  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:02.936480  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:03:02.936505  388805 client.go:168] LocalClient.Create starting
	I0419 20:03:02.936531  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:03:02.936569  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:02.936587  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:02.936673  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:03:02.936699  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:02.936714  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:02.936737  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:03:02.936745  388805 main.go:141] libmachine: (ha-423356) Calling .PreCreateCheck
	I0419 20:03:02.937106  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:02.937505  388805 main.go:141] libmachine: Creating machine...
	I0419 20:03:02.937518  388805 main.go:141] libmachine: (ha-423356) Calling .Create
	I0419 20:03:02.937653  388805 main.go:141] libmachine: (ha-423356) Creating KVM machine...
	I0419 20:03:02.938938  388805 main.go:141] libmachine: (ha-423356) DBG | found existing default KVM network
	I0419 20:03:02.939688  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:02.939546  388829 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0419 20:03:02.939722  388805 main.go:141] libmachine: (ha-423356) DBG | created network xml: 
	I0419 20:03:02.939742  388805 main.go:141] libmachine: (ha-423356) DBG | <network>
	I0419 20:03:02.939824  388805 main.go:141] libmachine: (ha-423356) DBG |   <name>mk-ha-423356</name>
	I0419 20:03:02.939849  388805 main.go:141] libmachine: (ha-423356) DBG |   <dns enable='no'/>
	I0419 20:03:02.939860  388805 main.go:141] libmachine: (ha-423356) DBG |   
	I0419 20:03:02.939874  388805 main.go:141] libmachine: (ha-423356) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0419 20:03:02.939884  388805 main.go:141] libmachine: (ha-423356) DBG |     <dhcp>
	I0419 20:03:02.939894  388805 main.go:141] libmachine: (ha-423356) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0419 20:03:02.939913  388805 main.go:141] libmachine: (ha-423356) DBG |     </dhcp>
	I0419 20:03:02.939932  388805 main.go:141] libmachine: (ha-423356) DBG |   </ip>
	I0419 20:03:02.939945  388805 main.go:141] libmachine: (ha-423356) DBG |   
	I0419 20:03:02.939960  388805 main.go:141] libmachine: (ha-423356) DBG | </network>
	I0419 20:03:02.939994  388805 main.go:141] libmachine: (ha-423356) DBG | 
	I0419 20:03:02.945195  388805 main.go:141] libmachine: (ha-423356) DBG | trying to create private KVM network mk-ha-423356 192.168.39.0/24...
	I0419 20:03:03.017485  388805 main.go:141] libmachine: (ha-423356) DBG | private KVM network mk-ha-423356 192.168.39.0/24 created
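For reference, the network definition logged piece-by-piece above corresponds to the following consolidated flow. This is only an illustrative sketch of doing the same thing by hand with virsh; the file name mk-ha-423356.xml is an assumption, not something minikube writes:

    # Save the network XML shown in the log to a file (hypothetical name).
    cat > mk-ha-423356.xml <<'EOF'
    <network>
      <name>mk-ha-423356</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>
    EOF
    # Define and start the private network on the system libvirt instance.
    virsh --connect qemu:///system net-define mk-ha-423356.xml
    virsh --connect qemu:///system net-start mk-ha-423356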
	I0419 20:03:03.017520  388805 main.go:141] libmachine: (ha-423356) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 ...
	I0419 20:03:03.017531  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.017389  388829 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:03.017545  388805 main.go:141] libmachine: (ha-423356) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:03:03.017771  388805 main.go:141] libmachine: (ha-423356) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:03:03.264638  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.264500  388829 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa...
	I0419 20:03:03.381449  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.381305  388829 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/ha-423356.rawdisk...
	I0419 20:03:03.381484  388805 main.go:141] libmachine: (ha-423356) DBG | Writing magic tar header
	I0419 20:03:03.381503  388805 main.go:141] libmachine: (ha-423356) DBG | Writing SSH key tar header
	I0419 20:03:03.381515  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:03.381422  388829 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 ...
	I0419 20:03:03.381529  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356
	I0419 20:03:03.381595  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356 (perms=drwx------)
	I0419 20:03:03.381620  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:03:03.381636  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:03:03.381663  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:03:03.381670  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:03:03.381679  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:03.381689  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:03:03.381702  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:03:03.381711  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:03:03.381726  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:03:03.381732  388805 main.go:141] libmachine: (ha-423356) DBG | Checking permissions on dir: /home
	I0419 20:03:03.381740  388805 main.go:141] libmachine: (ha-423356) DBG | Skipping /home - not owner
	I0419 20:03:03.381748  388805 main.go:141] libmachine: (ha-423356) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:03:03.381755  388805 main.go:141] libmachine: (ha-423356) Creating domain...
	I0419 20:03:03.382893  388805 main.go:141] libmachine: (ha-423356) define libvirt domain using xml: 
	I0419 20:03:03.382916  388805 main.go:141] libmachine: (ha-423356) <domain type='kvm'>
	I0419 20:03:03.382923  388805 main.go:141] libmachine: (ha-423356)   <name>ha-423356</name>
	I0419 20:03:03.382929  388805 main.go:141] libmachine: (ha-423356)   <memory unit='MiB'>2200</memory>
	I0419 20:03:03.382934  388805 main.go:141] libmachine: (ha-423356)   <vcpu>2</vcpu>
	I0419 20:03:03.382938  388805 main.go:141] libmachine: (ha-423356)   <features>
	I0419 20:03:03.382943  388805 main.go:141] libmachine: (ha-423356)     <acpi/>
	I0419 20:03:03.382950  388805 main.go:141] libmachine: (ha-423356)     <apic/>
	I0419 20:03:03.382955  388805 main.go:141] libmachine: (ha-423356)     <pae/>
	I0419 20:03:03.382967  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.382975  388805 main.go:141] libmachine: (ha-423356)   </features>
	I0419 20:03:03.382980  388805 main.go:141] libmachine: (ha-423356)   <cpu mode='host-passthrough'>
	I0419 20:03:03.382987  388805 main.go:141] libmachine: (ha-423356)   
	I0419 20:03:03.382992  388805 main.go:141] libmachine: (ha-423356)   </cpu>
	I0419 20:03:03.382997  388805 main.go:141] libmachine: (ha-423356)   <os>
	I0419 20:03:03.383002  388805 main.go:141] libmachine: (ha-423356)     <type>hvm</type>
	I0419 20:03:03.383010  388805 main.go:141] libmachine: (ha-423356)     <boot dev='cdrom'/>
	I0419 20:03:03.383016  388805 main.go:141] libmachine: (ha-423356)     <boot dev='hd'/>
	I0419 20:03:03.383090  388805 main.go:141] libmachine: (ha-423356)     <bootmenu enable='no'/>
	I0419 20:03:03.383121  388805 main.go:141] libmachine: (ha-423356)   </os>
	I0419 20:03:03.383132  388805 main.go:141] libmachine: (ha-423356)   <devices>
	I0419 20:03:03.383143  388805 main.go:141] libmachine: (ha-423356)     <disk type='file' device='cdrom'>
	I0419 20:03:03.383160  388805 main.go:141] libmachine: (ha-423356)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/boot2docker.iso'/>
	I0419 20:03:03.383173  388805 main.go:141] libmachine: (ha-423356)       <target dev='hdc' bus='scsi'/>
	I0419 20:03:03.383184  388805 main.go:141] libmachine: (ha-423356)       <readonly/>
	I0419 20:03:03.383192  388805 main.go:141] libmachine: (ha-423356)     </disk>
	I0419 20:03:03.383210  388805 main.go:141] libmachine: (ha-423356)     <disk type='file' device='disk'>
	I0419 20:03:03.383231  388805 main.go:141] libmachine: (ha-423356)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:03:03.383248  388805 main.go:141] libmachine: (ha-423356)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/ha-423356.rawdisk'/>
	I0419 20:03:03.383259  388805 main.go:141] libmachine: (ha-423356)       <target dev='hda' bus='virtio'/>
	I0419 20:03:03.383270  388805 main.go:141] libmachine: (ha-423356)     </disk>
	I0419 20:03:03.383278  388805 main.go:141] libmachine: (ha-423356)     <interface type='network'>
	I0419 20:03:03.383285  388805 main.go:141] libmachine: (ha-423356)       <source network='mk-ha-423356'/>
	I0419 20:03:03.383296  388805 main.go:141] libmachine: (ha-423356)       <model type='virtio'/>
	I0419 20:03:03.383315  388805 main.go:141] libmachine: (ha-423356)     </interface>
	I0419 20:03:03.383333  388805 main.go:141] libmachine: (ha-423356)     <interface type='network'>
	I0419 20:03:03.383345  388805 main.go:141] libmachine: (ha-423356)       <source network='default'/>
	I0419 20:03:03.383350  388805 main.go:141] libmachine: (ha-423356)       <model type='virtio'/>
	I0419 20:03:03.383358  388805 main.go:141] libmachine: (ha-423356)     </interface>
	I0419 20:03:03.383364  388805 main.go:141] libmachine: (ha-423356)     <serial type='pty'>
	I0419 20:03:03.383371  388805 main.go:141] libmachine: (ha-423356)       <target port='0'/>
	I0419 20:03:03.383376  388805 main.go:141] libmachine: (ha-423356)     </serial>
	I0419 20:03:03.383383  388805 main.go:141] libmachine: (ha-423356)     <console type='pty'>
	I0419 20:03:03.383389  388805 main.go:141] libmachine: (ha-423356)       <target type='serial' port='0'/>
	I0419 20:03:03.383394  388805 main.go:141] libmachine: (ha-423356)     </console>
	I0419 20:03:03.383399  388805 main.go:141] libmachine: (ha-423356)     <rng model='virtio'>
	I0419 20:03:03.383408  388805 main.go:141] libmachine: (ha-423356)       <backend model='random'>/dev/random</backend>
	I0419 20:03:03.383415  388805 main.go:141] libmachine: (ha-423356)     </rng>
	I0419 20:03:03.383435  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.383451  388805 main.go:141] libmachine: (ha-423356)     
	I0419 20:03:03.383482  388805 main.go:141] libmachine: (ha-423356)   </devices>
	I0419 20:03:03.383513  388805 main.go:141] libmachine: (ha-423356) </domain>
	I0419 20:03:03.383528  388805 main.go:141] libmachine: (ha-423356) 
	I0419 20:03:03.387835  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:f6:54:bb in network default
	I0419 20:03:03.388422  388805 main.go:141] libmachine: (ha-423356) Ensuring networks are active...
	I0419 20:03:03.388449  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:03.389092  388805 main.go:141] libmachine: (ha-423356) Ensuring network default is active
	I0419 20:03:03.389474  388805 main.go:141] libmachine: (ha-423356) Ensuring network mk-ha-423356 is active
	I0419 20:03:03.390134  388805 main.go:141] libmachine: (ha-423356) Getting domain xml...
	I0419 20:03:03.390802  388805 main.go:141] libmachine: (ha-423356) Creating domain...
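Similarly, once the domain XML printed above has been saved, the manual equivalent of the define/create/wait-for-IP sequence that follows would look roughly like the sketch below. This is illustrative only; minikube drives these steps through the libvirt API rather than virsh, and the file name is hypothetical:

    # Define the guest from the XML shown above, boot it, and poll its
    # DHCP lease until an address appears on the mk-ha-423356 network.
    virsh --connect qemu:///system define ha-423356-domain.xml
    virsh --connect qemu:///system start ha-423356
    virsh --connect qemu:///system domifaddr ha-423356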
	I0419 20:03:04.577031  388805 main.go:141] libmachine: (ha-423356) Waiting to get IP...
	I0419 20:03:04.577798  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:04.578185  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:04.578209  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:04.578156  388829 retry.go:31] will retry after 210.348795ms: waiting for machine to come up
	I0419 20:03:04.790567  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:04.790982  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:04.791004  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:04.790929  388829 retry.go:31] will retry after 255.069257ms: waiting for machine to come up
	I0419 20:03:05.047393  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.047985  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.048013  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.047920  388829 retry.go:31] will retry after 326.769699ms: waiting for machine to come up
	I0419 20:03:05.376549  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.377013  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.377065  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.376974  388829 retry.go:31] will retry after 598.145851ms: waiting for machine to come up
	I0419 20:03:05.978098  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:05.978525  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:05.978554  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:05.978473  388829 retry.go:31] will retry after 554.446944ms: waiting for machine to come up
	I0419 20:03:06.534185  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:06.534587  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:06.534623  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:06.534531  388829 retry.go:31] will retry after 799.56022ms: waiting for machine to come up
	I0419 20:03:07.335546  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:07.336009  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:07.336047  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:07.335960  388829 retry.go:31] will retry after 879.93969ms: waiting for machine to come up
	I0419 20:03:08.217737  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:08.218181  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:08.218213  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:08.218117  388829 retry.go:31] will retry after 957.891913ms: waiting for machine to come up
	I0419 20:03:09.177275  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:09.177702  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:09.177730  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:09.177617  388829 retry.go:31] will retry after 1.611056854s: waiting for machine to come up
	I0419 20:03:10.791345  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:10.791761  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:10.791787  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:10.791715  388829 retry.go:31] will retry after 1.559858168s: waiting for machine to come up
	I0419 20:03:12.353627  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:12.354099  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:12.354127  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:12.354051  388829 retry.go:31] will retry after 2.452370558s: waiting for machine to come up
	I0419 20:03:14.808552  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:14.808997  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:14.809032  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:14.808931  388829 retry.go:31] will retry after 2.373368989s: waiting for machine to come up
	I0419 20:03:17.185465  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:17.185857  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:17.185879  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:17.185802  388829 retry.go:31] will retry after 2.994584556s: waiting for machine to come up
	I0419 20:03:20.181568  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:20.182034  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find current IP address of domain ha-423356 in network mk-ha-423356
	I0419 20:03:20.182060  388805 main.go:141] libmachine: (ha-423356) DBG | I0419 20:03:20.181998  388829 retry.go:31] will retry after 5.268532534s: waiting for machine to come up
	I0419 20:03:25.453727  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.454172  388805 main.go:141] libmachine: (ha-423356) Found IP for machine: 192.168.39.7
	I0419 20:03:25.454188  388805 main.go:141] libmachine: (ha-423356) Reserving static IP address...
	I0419 20:03:25.454241  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has current primary IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.454630  388805 main.go:141] libmachine: (ha-423356) DBG | unable to find host DHCP lease matching {name: "ha-423356", mac: "52:54:00:aa:25:62", ip: "192.168.39.7"} in network mk-ha-423356
	I0419 20:03:25.527744  388805 main.go:141] libmachine: (ha-423356) DBG | Getting to WaitForSSH function...
	I0419 20:03:25.527781  388805 main.go:141] libmachine: (ha-423356) Reserved static IP address: 192.168.39.7
	I0419 20:03:25.527795  388805 main.go:141] libmachine: (ha-423356) Waiting for SSH to be available...
	I0419 20:03:25.530520  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.530951  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.530973  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.531128  388805 main.go:141] libmachine: (ha-423356) DBG | Using SSH client type: external
	I0419 20:03:25.531152  388805 main.go:141] libmachine: (ha-423356) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa (-rw-------)
	I0419 20:03:25.531219  388805 main.go:141] libmachine: (ha-423356) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:03:25.531240  388805 main.go:141] libmachine: (ha-423356) DBG | About to run SSH command:
	I0419 20:03:25.531252  388805 main.go:141] libmachine: (ha-423356) DBG | exit 0
	I0419 20:03:25.656437  388805 main.go:141] libmachine: (ha-423356) DBG | SSH cmd err, output: <nil>: 
	I0419 20:03:25.656782  388805 main.go:141] libmachine: (ha-423356) KVM machine creation complete!
	I0419 20:03:25.657149  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:25.657693  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:25.657925  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:25.658087  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:03:25.658103  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:25.659437  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:03:25.659451  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:03:25.659457  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:03:25.659463  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.661516  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.661851  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.661884  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.662045  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.662248  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.662418  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.662549  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.662715  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.662998  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.663013  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:03:25.764216  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:03:25.764240  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:03:25.764248  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.766861  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.767236  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.767266  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.767407  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.767654  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.767798  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.767958  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.768108  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.768299  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.768315  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:03:25.869494  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:03:25.869590  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:03:25.869619  388805 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:03:25.869635  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:25.869913  388805 buildroot.go:166] provisioning hostname "ha-423356"
	I0419 20:03:25.869943  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:25.870181  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.872906  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.873302  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.873347  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.873609  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.873801  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.873940  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.874129  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.874374  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.874580  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.874594  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356 && echo "ha-423356" | sudo tee /etc/hostname
	I0419 20:03:25.988738  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:03:25.988774  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:25.991681  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.992038  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:25.992076  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:25.992284  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:25.992502  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.992677  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:25.992810  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:25.992969  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:25.993214  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:25.993243  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:03:26.102598  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:03:26.102630  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:03:26.102692  388805 buildroot.go:174] setting up certificates
	I0419 20:03:26.102708  388805 provision.go:84] configureAuth start
	I0419 20:03:26.102720  388805 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:03:26.103049  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.105657  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.105970  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.105996  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.106174  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.108069  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.108385  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.108411  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.108679  388805 provision.go:143] copyHostCerts
	I0419 20:03:26.108713  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:03:26.108747  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:03:26.108755  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:03:26.108827  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:03:26.108902  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:03:26.108920  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:03:26.108925  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:03:26.108947  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:03:26.108998  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:03:26.109015  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:03:26.109021  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:03:26.109040  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:03:26.109091  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356 san=[127.0.0.1 192.168.39.7 ha-423356 localhost minikube]
	I0419 20:03:26.243241  388805 provision.go:177] copyRemoteCerts
	I0419 20:03:26.243311  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:03:26.243343  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.246005  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.246368  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.246399  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.246581  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.246759  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.246897  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.247067  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.329364  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:03:26.329433  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:03:26.356496  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:03:26.356592  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0419 20:03:26.383149  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:03:26.383227  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:03:26.409669  388805 provision.go:87] duration metric: took 306.947778ms to configureAuth
	I0419 20:03:26.409703  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:03:26.409899  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:26.409990  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.412507  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.412886  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.412916  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.413071  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.413258  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.413505  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.413685  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.413880  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:26.414040  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:26.414056  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:03:26.667960  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
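The %!s(MISSING) in the command a few lines above is not literally what runs on the VM: it is Go's fmt package flagging a %s verb with no matching argument when the command string is passed through a Printf-style logger. Judging by the output, the command actually executed is presumably the plain version with %s intact:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The same mangling explains the later date +%!s(MISSING).%!N(MISSING) (really date +%s.%N) and the %!s/%!y/%!p verbs in the find and stat commands further down this log.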
	
	I0419 20:03:26.668002  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:03:26.668014  388805 main.go:141] libmachine: (ha-423356) Calling .GetURL
	I0419 20:03:26.669354  388805 main.go:141] libmachine: (ha-423356) DBG | Using libvirt version 6000000
	I0419 20:03:26.671168  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.671463  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.671494  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.671606  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:03:26.671619  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:03:26.671637  388805 client.go:171] duration metric: took 23.735114952s to LocalClient.Create
	I0419 20:03:26.671669  388805 start.go:167] duration metric: took 23.735189159s to libmachine.API.Create "ha-423356"
	I0419 20:03:26.671683  388805 start.go:293] postStartSetup for "ha-423356" (driver="kvm2")
	I0419 20:03:26.671697  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:03:26.671722  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.671982  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:03:26.672004  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.673889  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.674176  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.674199  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.674325  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.674507  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.674654  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.674801  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.755868  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:03:26.760363  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:03:26.760391  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:03:26.760463  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:03:26.760584  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:03:26.760597  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:03:26.760760  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:03:26.770955  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:03:26.795442  388805 start.go:296] duration metric: took 123.744054ms for postStartSetup
	I0419 20:03:26.795493  388805 main.go:141] libmachine: (ha-423356) Calling .GetConfigRaw
	I0419 20:03:26.796118  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.798439  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.798783  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.798813  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.799060  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:26.799273  388805 start.go:128] duration metric: took 23.881442783s to createHost
	I0419 20:03:26.799304  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.801223  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.801563  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.801604  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.801701  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.801920  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.802096  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.802206  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.802326  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:03:26.802483  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:03:26.802498  388805 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:03:26.901494  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557006.870713224
	
	I0419 20:03:26.901520  388805 fix.go:216] guest clock: 1713557006.870713224
	I0419 20:03:26.901528  388805 fix.go:229] Guest: 2024-04-19 20:03:26.870713224 +0000 UTC Remote: 2024-04-19 20:03:26.799288765 +0000 UTC m=+24.008813931 (delta=71.424459ms)
	I0419 20:03:26.901548  388805 fix.go:200] guest clock delta is within tolerance: 71.424459ms
	I0419 20:03:26.901553  388805 start.go:83] releasing machines lock for "ha-423356", held for 23.983850205s
	I0419 20:03:26.901571  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.901828  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:26.904520  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.904871  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.904901  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.905039  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905518  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905706  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:26.905778  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:03:26.905836  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.905960  388805 ssh_runner.go:195] Run: cat /version.json
	I0419 20:03:26.905982  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:26.908813  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.908839  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909187  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.909216  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909247  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:26.909263  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:26.909340  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.909512  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.909591  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:26.909673  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.909737  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:26.909796  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:26.909839  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:26.909968  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:27.016706  388805 ssh_runner.go:195] Run: systemctl --version
	I0419 20:03:27.023021  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:03:27.191643  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:03:27.198203  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:03:27.198270  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:03:27.215795  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:03:27.215821  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:03:27.215889  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:03:27.233540  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:03:27.247728  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:03:27.247781  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:03:27.261951  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:03:27.277027  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:03:27.398569  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:03:27.546683  388805 docker.go:233] disabling docker service ...
	I0419 20:03:27.546766  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:03:27.562620  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:03:27.576030  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:03:27.717594  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:03:27.854194  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:03:27.868446  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:03:27.887617  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:03:27.887707  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.898351  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:03:27.898419  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.908980  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.919914  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.930995  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:03:27.942383  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.953171  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:03:27.970421  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
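Taken together, the sed edits above steer the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf toward the minikube pause image, cgroupfs cgroup management, a pod-level conmon cgroup, and an unprivileged-port sysctl. A rough sketch of what the drop-in might contain after these edits (the TOML table headers reflect the usual CRI-O drop-in layout and are an assumption, not output from this run):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]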
	I0419 20:03:27.981291  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:03:27.991035  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:03:27.991102  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:03:28.004245  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:03:28.014484  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:03:28.140460  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:03:28.279674  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:03:28.279758  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:03:28.284871  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:03:28.284929  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:03:28.288932  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:03:28.328993  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
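The crictl call above picks up its runtime endpoint from the /etc/crictl.yaml written a few commands earlier (runtime-endpoint: unix:///var/run/crio/crio.sock). For illustration, an equivalent invocation that passes the endpoint explicitly would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version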
	I0419 20:03:28.329087  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:03:28.363050  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:03:28.399306  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:03:28.400656  388805 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:03:28.403203  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:28.403527  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:28.403556  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:28.403768  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:03:28.408153  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:03:28.422190  388805 kubeadm.go:877] updating cluster {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:03:28.422298  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:28.422341  388805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:03:28.456075  388805 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 20:03:28.456140  388805 ssh_runner.go:195] Run: which lz4
	I0419 20:03:28.460179  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0419 20:03:28.460272  388805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 20:03:28.464624  388805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 20:03:28.464667  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0419 20:03:29.977927  388805 crio.go:462] duration metric: took 1.517679112s to copy over tarball
	I0419 20:03:29.978041  388805 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 20:03:32.178372  388805 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200298609s)
	I0419 20:03:32.178401  388805 crio.go:469] duration metric: took 2.200430945s to extract the tarball
	I0419 20:03:32.178411  388805 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 20:03:32.215944  388805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:03:32.264481  388805 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:03:32.264509  388805 cache_images.go:84] Images are preloaded, skipping loading
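Because no images were found, the preload tarball is copied to the VM and unpacked under /var, after which the image store passes the check. A sketch of the same steps run by hand on the VM (paths and flags copied from the log; illustrative only):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images | grep kube-apiserver   # should now list registry.k8s.io/kube-apiserver:v1.30.0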
	I0419 20:03:32.264517  388805 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.30.0 crio true true} ...
	I0419 20:03:32.264624  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
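The unit drop-in above pins the kubelet to the bundled v1.30.0 binary, the CRI-O socket, and the node IP 192.168.39.7. Once it has been written (the scp steps appear a few lines below), it can be inspected with systemd's own tooling (illustrative; the drop-in path is the one the log writes):

    sudo systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf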
	I0419 20:03:32.264709  388805 ssh_runner.go:195] Run: crio config
	I0419 20:03:32.310485  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:32.310508  388805 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 20:03:32.310520  388805 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:03:32.310548  388805 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423356 NodeName:ha-423356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:03:32.310714  388805 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
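This generated file is what kubeadm init consumes below: an InitConfiguration for the local endpoint, a ClusterConfiguration pointing at the HA endpoint control-plane.minikube.internal:8443, plus kubelet and kube-proxy configs. A hedged pre-flight sanity check, assuming the validate subcommand shipped with kubeadm v1.30 and the path the test writes the file to:

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml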
	
	I0419 20:03:32.310737  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:03:32.310776  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:03:32.330065  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:03:32.330225  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
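kube-vip runs as a static pod on the control plane and holds the virtual IP 192.168.39.254, load-balancing port 8443 across control-plane nodes. Once the pod is up, the VIP should answer on the API port; an illustrative probe (not part of the test):

    curl -k https://192.168.39.254:8443/version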
	I0419 20:03:32.330289  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:03:32.341007  388805 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:03:32.341086  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 20:03:32.351073  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0419 20:03:32.368450  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:03:32.385658  388805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0419 20:03:32.402827  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0419 20:03:32.420334  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:03:32.424256  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:03:32.437379  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:03:32.571957  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:03:32.590774  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.7
	I0419 20:03:32.590799  388805 certs.go:194] generating shared ca certs ...
	I0419 20:03:32.590816  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.590980  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:03:32.591038  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:03:32.591054  388805 certs.go:256] generating profile certs ...
	I0419 20:03:32.591113  388805 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:03:32.591128  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt with IP's: []
	I0419 20:03:32.723601  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt ...
	I0419 20:03:32.723629  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt: {Name:mk1bd2547d29de1d78dafadecadc8f6efc913cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.723795  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key ...
	I0419 20:03:32.723806  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key: {Name:mk1478e712eb8f185eb76d47c3f87d2afed17914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.723899  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b
	I0419 20:03:32.723920  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]
	I0419 20:03:32.847857  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b ...
	I0419 20:03:32.847890  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b: {Name:mk6196b7f125d4557863fc7da4b5e249cdadf91a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.848067  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b ...
	I0419 20:03:32.848086  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b: {Name:mk123597a78a2e3d0fb518f916030db99d125560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:32.848178  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.653a743b -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:03:32.848308  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.653a743b -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:03:32.848394  388805 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:03:32.848418  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt with IP's: []
	I0419 20:03:33.165191  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt ...
	I0419 20:03:33.165225  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt: {Name:mk42c5a4581b58f03d988ba5fb49cc746e3616fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:33.165379  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key ...
	I0419 20:03:33.165390  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key: {Name:mk5b2f3debab93ffe0190a67aac9b6bb8ea9000e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:33.165453  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:03:33.165470  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:03:33.165489  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:03:33.165502  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:03:33.165515  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:03:33.165532  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:03:33.165544  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:03:33.165555  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:03:33.165601  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:03:33.165668  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:03:33.165682  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:03:33.165702  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:03:33.165723  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:03:33.165740  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:03:33.165787  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:03:33.165819  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.165834  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.165846  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
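The apiserver certificate generated above is signed for the service IP, localhost, the node IP, and the HA virtual IP. An illustrative way to confirm the SANs on the generated cert (path from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # expected to include 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.7 and 192.168.39.254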
	I0419 20:03:33.166431  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:03:33.191562  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:03:33.216011  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:03:33.241838  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:03:33.268607  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 20:03:33.295119  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 20:03:33.321459  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:03:33.348313  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:03:33.385201  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:03:33.413507  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:03:33.441390  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:03:33.465663  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:03:33.482967  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:03:33.488869  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:03:33.500539  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.505625  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.505697  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:03:33.511579  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:03:33.523329  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:03:33.535160  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.539864  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.539930  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:03:33.545617  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:03:33.557185  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:03:33.568618  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.573237  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.573285  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:03:33.579041  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
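The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is what the interleaved "openssl x509 -hash" runs compute. The pattern, run by hand, looks like this (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0 -> minikubeCA.pem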
	I0419 20:03:33.590796  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:03:33.595490  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:03:33.595555  388805 kubeadm.go:391] StartCluster: {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:03:33.595644  388805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:03:33.595713  388805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:03:33.633360  388805 cri.go:89] found id: ""
	I0419 20:03:33.633455  388805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 20:03:33.644229  388805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 20:03:33.654631  388805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 20:03:33.664977  388805 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 20:03:33.665000  388805 kubeadm.go:156] found existing configuration files:
	
	I0419 20:03:33.665066  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 20:03:33.674806  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 20:03:33.674872  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 20:03:33.684963  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 20:03:33.694505  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 20:03:33.694568  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 20:03:33.704677  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 20:03:33.714350  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 20:03:33.714418  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 20:03:33.724611  388805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 20:03:33.734639  388805 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 20:03:33.734729  388805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 20:03:33.745192  388805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 20:03:33.847901  388805 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0419 20:03:33.848005  388805 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 20:03:33.973751  388805 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 20:03:33.973883  388805 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 20:03:33.974017  388805 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 20:03:34.222167  388805 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 20:03:34.340982  388805 out.go:204]   - Generating certificates and keys ...
	I0419 20:03:34.341096  388805 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 20:03:34.341176  388805 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 20:03:34.400334  388805 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 20:03:34.535679  388805 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 20:03:34.664392  388805 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 20:03:34.789170  388805 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 20:03:35.056390  388805 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 20:03:35.056568  388805 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-423356 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0419 20:03:35.117701  388805 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 20:03:35.117850  388805 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-423356 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0419 20:03:35.286285  388805 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 20:03:35.440213  388805 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 20:03:35.647864  388805 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 20:03:35.648137  388805 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 20:03:35.822817  388805 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 20:03:36.067883  388805 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0419 20:03:36.377307  388805 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 20:03:36.527837  388805 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 20:03:36.808888  388805 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 20:03:36.809489  388805 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 20:03:36.812492  388805 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 20:03:36.814740  388805 out.go:204]   - Booting up control plane ...
	I0419 20:03:36.814931  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 20:03:36.815078  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 20:03:36.815206  388805 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 20:03:36.836621  388805 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 20:03:36.837036  388805 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 20:03:36.837097  388805 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 20:03:36.976652  388805 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0419 20:03:36.976785  388805 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0419 20:03:37.978129  388805 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002406944s
	I0419 20:03:37.978214  388805 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0419 20:03:43.923411  388805 kubeadm.go:309] [api-check] The API server is healthy after 5.949629263s
	I0419 20:03:43.936942  388805 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 20:03:43.953542  388805 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 20:03:43.981212  388805 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 20:03:43.981467  388805 kubeadm.go:309] [mark-control-plane] Marking the node ha-423356 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 20:03:43.994804  388805 kubeadm.go:309] [bootstrap-token] Using token: awd3b6.qij36bhfjtodtmhg
	I0419 20:03:43.996377  388805 out.go:204]   - Configuring RBAC rules ...
	I0419 20:03:43.996503  388805 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 20:03:44.001330  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 20:03:44.009538  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 20:03:44.013911  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 20:03:44.020585  388805 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 20:03:44.027243  388805 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 20:03:44.330976  388805 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 20:03:44.765900  388805 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 20:03:45.331489  388805 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 20:03:45.332459  388805 kubeadm.go:309] 
	I0419 20:03:45.332576  388805 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 20:03:45.332597  388805 kubeadm.go:309] 
	I0419 20:03:45.332707  388805 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 20:03:45.332728  388805 kubeadm.go:309] 
	I0419 20:03:45.332770  388805 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 20:03:45.332856  388805 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 20:03:45.332935  388805 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 20:03:45.333017  388805 kubeadm.go:309] 
	I0419 20:03:45.333112  388805 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 20:03:45.333123  388805 kubeadm.go:309] 
	I0419 20:03:45.333220  388805 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 20:03:45.333230  388805 kubeadm.go:309] 
	I0419 20:03:45.333405  388805 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 20:03:45.333546  388805 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 20:03:45.333649  388805 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 20:03:45.333659  388805 kubeadm.go:309] 
	I0419 20:03:45.333763  388805 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 20:03:45.333876  388805 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 20:03:45.333898  388805 kubeadm.go:309] 
	I0419 20:03:45.334005  388805 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token awd3b6.qij36bhfjtodtmhg \
	I0419 20:03:45.334149  388805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea \
	I0419 20:03:45.334184  388805 kubeadm.go:309] 	--control-plane 
	I0419 20:03:45.334188  388805 kubeadm.go:309] 
	I0419 20:03:45.334306  388805 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 20:03:45.334322  388805 kubeadm.go:309] 
	I0419 20:03:45.334461  388805 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token awd3b6.qij36bhfjtodtmhg \
	I0419 20:03:45.334597  388805 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea 
	I0419 20:03:45.335016  388805 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
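The only kubeadm warning concerns the kubelet unit not being enabled; minikube started the unit itself earlier in the log (20:03:32.571957), so the run proceeds, but on a self-managed node the warning would be addressed with (illustrative):

    sudo systemctl enable kubelet.service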
	I0419 20:03:45.335039  388805 cni.go:84] Creating CNI manager for ""
	I0419 20:03:45.335045  388805 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 20:03:45.337136  388805 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0419 20:03:45.338560  388805 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0419 20:03:45.344280  388805 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0419 20:03:45.344296  388805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0419 20:03:45.364073  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
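With one node detected, minikube recommends kindnet and applies its manifest with the bundled kubectl. An illustrative follow-up check that the CNI daemonset was created (same kubectl binary and kubeconfig paths as the log):

    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get daemonsets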
	I0419 20:03:45.698100  388805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 20:03:45.698155  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:45.698178  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356 minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=true
	I0419 20:03:45.731774  388805 ops.go:34] apiserver oom_adj: -16
	I0419 20:03:45.891906  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:46.392606  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:46.892887  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:47.392996  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:47.892402  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:48.392811  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:48.892887  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:49.392218  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:49.892060  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:50.392411  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:50.892506  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:51.392771  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:51.892571  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:52.392019  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:52.892863  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:53.392557  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:53.892046  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:54.392147  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:54.892479  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:55.392551  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:55.892291  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:56.392509  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:56.892103  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:57.392112  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 20:03:57.519848  388805 kubeadm.go:1107] duration metric: took 11.82176059s to wait for elevateKubeSystemPrivileges
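The burst of identical "get sa default" calls above is a poll loop: right after kubeadm init, the default ServiceAccount appears asynchronously, and minikube polls for it as part of elevateKubeSystemPrivileges (11.82s here). An equivalent hand-written loop (illustrative):

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
        sleep 0.5
    done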
	W0419 20:03:57.519891  388805 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 20:03:57.519904  388805 kubeadm.go:393] duration metric: took 23.924352566s to StartCluster
	I0419 20:03:57.519957  388805 settings.go:142] acquiring lock: {Name:mk4d89c3e562693d551452a3da7ca47ff322d54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:57.520065  388805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:57.520932  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/kubeconfig: {Name:mk754e069328c06a767f4b9e66452a46be84b49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:03:57.521167  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0419 20:03:57.521185  388805 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:57.521220  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:03:57.521226  388805 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 20:03:57.521307  388805 addons.go:69] Setting storage-provisioner=true in profile "ha-423356"
	I0419 20:03:57.521311  388805 addons.go:69] Setting default-storageclass=true in profile "ha-423356"
	I0419 20:03:57.521360  388805 addons.go:234] Setting addon storage-provisioner=true in "ha-423356"
	I0419 20:03:57.521375  388805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-423356"
	I0419 20:03:57.521396  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:03:57.521441  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:57.521810  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.521845  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.521851  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.521895  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.537376  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0419 20:03:57.537391  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0419 20:03:57.537956  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.538029  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.538456  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.538477  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.538613  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.538641  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.538826  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.538976  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.539141  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.539375  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.539408  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.541320  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:03:57.541691  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 20:03:57.542321  388805 cert_rotation.go:137] Starting client certificate rotation controller
	I0419 20:03:57.542614  388805 addons.go:234] Setting addon default-storageclass=true in "ha-423356"
	I0419 20:03:57.542664  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:03:57.543048  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.543082  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.555711  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0419 20:03:57.556204  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.556760  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.556803  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.557262  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.557536  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.558692  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0419 20:03:57.559305  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.559398  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:57.561650  388805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:03:57.559926  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.562849  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.562963  388805 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 20:03:57.562980  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 20:03:57.562997  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:57.563264  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.563810  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:57.563863  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:57.566406  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.566855  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:57.566893  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.567012  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:57.567206  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:57.567375  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:57.567554  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:57.579256  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0419 20:03:57.579707  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:57.580185  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:57.580208  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:57.580578  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:57.580773  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:03:57.582416  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:03:57.582692  388805 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 20:03:57.582709  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 20:03:57.582729  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:03:57.585565  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.585964  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:03:57.585991  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:03:57.586245  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:03:57.586426  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:03:57.586607  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:03:57.586739  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:03:57.666618  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0419 20:03:57.740900  388805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 20:03:57.770575  388805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 20:03:58.177522  388805 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0419 20:03:58.430908  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.430935  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.430979  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431008  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431256  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431274  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431283  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431291  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431300  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431302  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431326  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431339  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431347  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.431358  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.431568  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431583  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431627  388805 main.go:141] libmachine: (ha-423356) DBG | Closing plugin on server side
	I0419 20:03:58.431654  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.431665  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.431714  388805 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0419 20:03:58.431733  388805 round_trippers.go:469] Request Headers:
	I0419 20:03:58.431744  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:03:58.431752  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:03:58.444053  388805 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0419 20:03:58.444656  388805 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0419 20:03:58.444676  388805 round_trippers.go:469] Request Headers:
	I0419 20:03:58.444691  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:03:58.444695  388805 round_trippers.go:473]     Content-Type: application/json
	I0419 20:03:58.444698  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:03:58.453924  388805 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 20:03:58.454109  388805 main.go:141] libmachine: Making call to close driver server
	I0419 20:03:58.454123  388805 main.go:141] libmachine: (ha-423356) Calling .Close
	I0419 20:03:58.454460  388805 main.go:141] libmachine: Successfully made call to close driver server
	I0419 20:03:58.454479  388805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0419 20:03:58.456055  388805 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0419 20:03:58.457291  388805 addons.go:505] duration metric: took 936.060266ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0419 20:03:58.457353  388805 start.go:245] waiting for cluster config update ...
	I0419 20:03:58.457373  388805 start.go:254] writing updated cluster config ...
	I0419 20:03:58.459228  388805 out.go:177] 
	I0419 20:03:58.460969  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:03:58.461046  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:58.462709  388805 out.go:177] * Starting "ha-423356-m02" control-plane node in "ha-423356" cluster
	I0419 20:03:58.463880  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:03:58.463909  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:03:58.464002  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:03:58.464014  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:03:58.464081  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:03:58.464231  388805 start.go:360] acquireMachinesLock for ha-423356-m02: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:03:58.464277  388805 start.go:364] duration metric: took 27.912µs to acquireMachinesLock for "ha-423356-m02"
	I0419 20:03:58.464321  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:03:58.464397  388805 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0419 20:03:58.465857  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:03:58.465964  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:03:58.465995  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:03:58.480622  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0419 20:03:58.481060  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:03:58.481567  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:03:58.481595  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:03:58.481931  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:03:58.482144  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:03:58.482301  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:03:58.482550  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:03:58.482573  388805 client.go:168] LocalClient.Create starting
	I0419 20:03:58.482602  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:03:58.482733  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:58.483131  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:58.483279  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:03:58.483327  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:03:58.483344  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:03:58.483380  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:03:58.483392  388805 main.go:141] libmachine: (ha-423356-m02) Calling .PreCreateCheck
	I0419 20:03:58.483705  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:03:58.484286  388805 main.go:141] libmachine: Creating machine...
	I0419 20:03:58.484306  388805 main.go:141] libmachine: (ha-423356-m02) Calling .Create
	I0419 20:03:58.484536  388805 main.go:141] libmachine: (ha-423356-m02) Creating KVM machine...
	I0419 20:03:58.486150  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found existing default KVM network
	I0419 20:03:58.486180  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found existing private KVM network mk-ha-423356
	I0419 20:03:58.486328  388805 main.go:141] libmachine: (ha-423356-m02) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 ...
	I0419 20:03:58.486371  388805 main.go:141] libmachine: (ha-423356-m02) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:03:58.486390  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.486295  389210 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:58.486526  388805 main.go:141] libmachine: (ha-423356-m02) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:03:58.735178  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.735058  389210 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa...
	I0419 20:03:58.856065  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.855909  389210 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/ha-423356-m02.rawdisk...
	I0419 20:03:58.856105  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Writing magic tar header
	I0419 20:03:58.856121  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Writing SSH key tar header
	I0419 20:03:58.856134  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:03:58.856019  389210 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 ...
	I0419 20:03:58.856169  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02
	I0419 20:03:58.856198  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02 (perms=drwx------)
	I0419 20:03:58.856208  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:03:58.856225  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:03:58.856237  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:03:58.856252  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:03:58.856264  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:03:58.856280  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:03:58.856290  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Checking permissions on dir: /home
	I0419 20:03:58.856300  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Skipping /home - not owner
	I0419 20:03:58.856311  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:03:58.856325  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:03:58.856337  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:03:58.856350  388805 main.go:141] libmachine: (ha-423356-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:03:58.856360  388805 main.go:141] libmachine: (ha-423356-m02) Creating domain...
	I0419 20:03:58.857502  388805 main.go:141] libmachine: (ha-423356-m02) define libvirt domain using xml: 
	I0419 20:03:58.857536  388805 main.go:141] libmachine: (ha-423356-m02) <domain type='kvm'>
	I0419 20:03:58.857548  388805 main.go:141] libmachine: (ha-423356-m02)   <name>ha-423356-m02</name>
	I0419 20:03:58.857557  388805 main.go:141] libmachine: (ha-423356-m02)   <memory unit='MiB'>2200</memory>
	I0419 20:03:58.857566  388805 main.go:141] libmachine: (ha-423356-m02)   <vcpu>2</vcpu>
	I0419 20:03:58.857573  388805 main.go:141] libmachine: (ha-423356-m02)   <features>
	I0419 20:03:58.857582  388805 main.go:141] libmachine: (ha-423356-m02)     <acpi/>
	I0419 20:03:58.857593  388805 main.go:141] libmachine: (ha-423356-m02)     <apic/>
	I0419 20:03:58.857601  388805 main.go:141] libmachine: (ha-423356-m02)     <pae/>
	I0419 20:03:58.857610  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.857619  388805 main.go:141] libmachine: (ha-423356-m02)   </features>
	I0419 20:03:58.857630  388805 main.go:141] libmachine: (ha-423356-m02)   <cpu mode='host-passthrough'>
	I0419 20:03:58.857641  388805 main.go:141] libmachine: (ha-423356-m02)   
	I0419 20:03:58.857648  388805 main.go:141] libmachine: (ha-423356-m02)   </cpu>
	I0419 20:03:58.857687  388805 main.go:141] libmachine: (ha-423356-m02)   <os>
	I0419 20:03:58.857714  388805 main.go:141] libmachine: (ha-423356-m02)     <type>hvm</type>
	I0419 20:03:58.857738  388805 main.go:141] libmachine: (ha-423356-m02)     <boot dev='cdrom'/>
	I0419 20:03:58.857750  388805 main.go:141] libmachine: (ha-423356-m02)     <boot dev='hd'/>
	I0419 20:03:58.857760  388805 main.go:141] libmachine: (ha-423356-m02)     <bootmenu enable='no'/>
	I0419 20:03:58.857771  388805 main.go:141] libmachine: (ha-423356-m02)   </os>
	I0419 20:03:58.857779  388805 main.go:141] libmachine: (ha-423356-m02)   <devices>
	I0419 20:03:58.857792  388805 main.go:141] libmachine: (ha-423356-m02)     <disk type='file' device='cdrom'>
	I0419 20:03:58.857808  388805 main.go:141] libmachine: (ha-423356-m02)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/boot2docker.iso'/>
	I0419 20:03:58.857820  388805 main.go:141] libmachine: (ha-423356-m02)       <target dev='hdc' bus='scsi'/>
	I0419 20:03:58.857828  388805 main.go:141] libmachine: (ha-423356-m02)       <readonly/>
	I0419 20:03:58.857839  388805 main.go:141] libmachine: (ha-423356-m02)     </disk>
	I0419 20:03:58.857852  388805 main.go:141] libmachine: (ha-423356-m02)     <disk type='file' device='disk'>
	I0419 20:03:58.857862  388805 main.go:141] libmachine: (ha-423356-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:03:58.857958  388805 main.go:141] libmachine: (ha-423356-m02)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/ha-423356-m02.rawdisk'/>
	I0419 20:03:58.858001  388805 main.go:141] libmachine: (ha-423356-m02)       <target dev='hda' bus='virtio'/>
	I0419 20:03:58.858037  388805 main.go:141] libmachine: (ha-423356-m02)     </disk>
	I0419 20:03:58.858062  388805 main.go:141] libmachine: (ha-423356-m02)     <interface type='network'>
	I0419 20:03:58.858078  388805 main.go:141] libmachine: (ha-423356-m02)       <source network='mk-ha-423356'/>
	I0419 20:03:58.858089  388805 main.go:141] libmachine: (ha-423356-m02)       <model type='virtio'/>
	I0419 20:03:58.858101  388805 main.go:141] libmachine: (ha-423356-m02)     </interface>
	I0419 20:03:58.858109  388805 main.go:141] libmachine: (ha-423356-m02)     <interface type='network'>
	I0419 20:03:58.858122  388805 main.go:141] libmachine: (ha-423356-m02)       <source network='default'/>
	I0419 20:03:58.858137  388805 main.go:141] libmachine: (ha-423356-m02)       <model type='virtio'/>
	I0419 20:03:58.858149  388805 main.go:141] libmachine: (ha-423356-m02)     </interface>
	I0419 20:03:58.858160  388805 main.go:141] libmachine: (ha-423356-m02)     <serial type='pty'>
	I0419 20:03:58.858170  388805 main.go:141] libmachine: (ha-423356-m02)       <target port='0'/>
	I0419 20:03:58.858181  388805 main.go:141] libmachine: (ha-423356-m02)     </serial>
	I0419 20:03:58.858192  388805 main.go:141] libmachine: (ha-423356-m02)     <console type='pty'>
	I0419 20:03:58.858203  388805 main.go:141] libmachine: (ha-423356-m02)       <target type='serial' port='0'/>
	I0419 20:03:58.858211  388805 main.go:141] libmachine: (ha-423356-m02)     </console>
	I0419 20:03:58.858222  388805 main.go:141] libmachine: (ha-423356-m02)     <rng model='virtio'>
	I0419 20:03:58.858276  388805 main.go:141] libmachine: (ha-423356-m02)       <backend model='random'>/dev/random</backend>
	I0419 20:03:58.858302  388805 main.go:141] libmachine: (ha-423356-m02)     </rng>
	I0419 20:03:58.858312  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.858319  388805 main.go:141] libmachine: (ha-423356-m02)     
	I0419 20:03:58.858329  388805 main.go:141] libmachine: (ha-423356-m02)   </devices>
	I0419 20:03:58.858335  388805 main.go:141] libmachine: (ha-423356-m02) </domain>
	I0419 20:03:58.858346  388805 main.go:141] libmachine: (ha-423356-m02) 
	I0419 20:03:58.864683  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:a2:f4:56 in network default
	I0419 20:03:58.865277  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring networks are active...
	I0419 20:03:58.865331  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:03:58.866010  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring network default is active
	I0419 20:03:58.866409  388805 main.go:141] libmachine: (ha-423356-m02) Ensuring network mk-ha-423356 is active
	I0419 20:03:58.866813  388805 main.go:141] libmachine: (ha-423356-m02) Getting domain xml...
	I0419 20:03:58.867785  388805 main.go:141] libmachine: (ha-423356-m02) Creating domain...
	I0419 20:04:00.103380  388805 main.go:141] libmachine: (ha-423356-m02) Waiting to get IP...
	I0419 20:04:00.104279  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.104655  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.104682  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.104617  389210 retry.go:31] will retry after 301.988537ms: waiting for machine to come up
	I0419 20:04:00.408594  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.409195  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.409225  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.409137  389210 retry.go:31] will retry after 329.946651ms: waiting for machine to come up
	I0419 20:04:00.740941  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:00.741447  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:00.741476  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:00.741399  389210 retry.go:31] will retry after 366.125678ms: waiting for machine to come up
	I0419 20:04:01.109032  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:01.109524  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:01.109552  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:01.109480  389210 retry.go:31] will retry after 439.45473ms: waiting for machine to come up
	I0419 20:04:01.550810  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:01.551168  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:01.551197  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:01.551123  389210 retry.go:31] will retry after 532.55463ms: waiting for machine to come up
	I0419 20:04:02.085482  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:02.085969  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:02.086006  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:02.085871  389210 retry.go:31] will retry after 914.829151ms: waiting for machine to come up
	I0419 20:04:03.003220  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:03.003698  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:03.003725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:03.003670  389210 retry.go:31] will retry after 876.494824ms: waiting for machine to come up
	I0419 20:04:03.881855  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:03.882385  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:03.882420  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:03.882332  389210 retry.go:31] will retry after 909.993683ms: waiting for machine to come up
	I0419 20:04:04.793769  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:04.794244  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:04.794283  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:04.794178  389210 retry.go:31] will retry after 1.551125756s: waiting for machine to come up
	I0419 20:04:06.347880  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:06.348387  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:06.348417  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:06.348339  389210 retry.go:31] will retry after 1.808278203s: waiting for machine to come up
	I0419 20:04:08.159309  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:08.159801  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:08.159830  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:08.159762  389210 retry.go:31] will retry after 2.259690381s: waiting for machine to come up
	I0419 20:04:10.421816  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:10.422252  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:10.422283  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:10.422207  389210 retry.go:31] will retry after 2.687448152s: waiting for machine to come up
	I0419 20:04:13.112750  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:13.113160  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:13.113185  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:13.113125  389210 retry.go:31] will retry after 3.825664412s: waiting for machine to come up
	I0419 20:04:16.941639  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:16.942275  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find current IP address of domain ha-423356-m02 in network mk-ha-423356
	I0419 20:04:16.942306  388805 main.go:141] libmachine: (ha-423356-m02) DBG | I0419 20:04:16.942220  389210 retry.go:31] will retry after 3.97876348s: waiting for machine to come up
	I0419 20:04:20.922725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.923228  388805 main.go:141] libmachine: (ha-423356-m02) Found IP for machine: 192.168.39.121
	I0419 20:04:20.923258  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has current primary IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.923266  388805 main.go:141] libmachine: (ha-423356-m02) Reserving static IP address...
	I0419 20:04:20.923543  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find host DHCP lease matching {name: "ha-423356-m02", mac: "52:54:00:1e:9f:96", ip: "192.168.39.121"} in network mk-ha-423356
	I0419 20:04:20.996198  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Getting to WaitForSSH function...
	I0419 20:04:20.996229  388805 main.go:141] libmachine: (ha-423356-m02) Reserved static IP address: 192.168.39.121
	I0419 20:04:20.996246  388805 main.go:141] libmachine: (ha-423356-m02) Waiting for SSH to be available...
	I0419 20:04:20.998614  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:20.998954  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356
	I0419 20:04:20.998984  388805 main.go:141] libmachine: (ha-423356-m02) DBG | unable to find defined IP address of network mk-ha-423356 interface with MAC address 52:54:00:1e:9f:96
	I0419 20:04:20.999160  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH client type: external
	I0419 20:04:20.999182  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa (-rw-------)
	I0419 20:04:20.999204  388805 main.go:141] libmachine: (ha-423356-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:04:20.999217  388805 main.go:141] libmachine: (ha-423356-m02) DBG | About to run SSH command:
	I0419 20:04:20.999226  388805 main.go:141] libmachine: (ha-423356-m02) DBG | exit 0
	I0419 20:04:21.002750  388805 main.go:141] libmachine: (ha-423356-m02) DBG | SSH cmd err, output: exit status 255: 
	I0419 20:04:21.002777  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0419 20:04:21.002787  388805 main.go:141] libmachine: (ha-423356-m02) DBG | command : exit 0
	I0419 20:04:21.002794  388805 main.go:141] libmachine: (ha-423356-m02) DBG | err     : exit status 255
	I0419 20:04:21.002804  388805 main.go:141] libmachine: (ha-423356-m02) DBG | output  : 
	I0419 20:04:24.003077  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Getting to WaitForSSH function...
	I0419 20:04:24.005617  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.006044  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.006082  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.006267  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH client type: external
	I0419 20:04:24.006289  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa (-rw-------)
	I0419 20:04:24.006321  388805 main.go:141] libmachine: (ha-423356-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:04:24.006335  388805 main.go:141] libmachine: (ha-423356-m02) DBG | About to run SSH command:
	I0419 20:04:24.006347  388805 main.go:141] libmachine: (ha-423356-m02) DBG | exit 0
	I0419 20:04:24.132672  388805 main.go:141] libmachine: (ha-423356-m02) DBG | SSH cmd err, output: <nil>: 
	I0419 20:04:24.132980  388805 main.go:141] libmachine: (ha-423356-m02) KVM machine creation complete!
	I0419 20:04:24.133286  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:04:24.133877  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:24.134108  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:24.134297  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:04:24.134311  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:04:24.135689  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:04:24.135708  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:04:24.135716  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:04:24.135724  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.138624  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.139008  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.139053  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.139188  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.139379  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.139544  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.139718  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.139900  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.140110  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.140122  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:04:24.252119  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:04:24.252150  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:04:24.252163  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.255034  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.255430  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.255464  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.255566  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.255800  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.255957  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.256083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.256231  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.256462  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.256475  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:04:24.373877  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:04:24.373955  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:04:24.373965  388805 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:04:24.373977  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.374255  388805 buildroot.go:166] provisioning hostname "ha-423356-m02"
	I0419 20:04:24.374292  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.374491  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.377249  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.377560  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.377586  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.377725  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.377916  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.378083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.378237  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.378452  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.378673  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.378688  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356-m02 && echo "ha-423356-m02" | sudo tee /etc/hostname
	I0419 20:04:24.507630  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356-m02
	
	I0419 20:04:24.507662  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.510376  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.510725  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.510753  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.510945  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.511149  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.511305  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.511436  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.511661  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.511893  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.511917  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:04:24.640048  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:04:24.640084  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:04:24.640106  388805 buildroot.go:174] setting up certificates
	I0419 20:04:24.640117  388805 provision.go:84] configureAuth start
	I0419 20:04:24.640126  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetMachineName
	I0419 20:04:24.640458  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:24.643287  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.643718  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.643749  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.643879  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.646037  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.646425  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.646460  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.646561  388805 provision.go:143] copyHostCerts
	I0419 20:04:24.646602  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:04:24.646646  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:04:24.646656  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:04:24.646735  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:04:24.646844  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:04:24.646872  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:04:24.646882  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:04:24.646928  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:04:24.646994  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:04:24.647014  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:04:24.647023  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:04:24.647059  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:04:24.647159  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356-m02 san=[127.0.0.1 192.168.39.121 ha-423356-m02 localhost minikube]
	I0419 20:04:24.759734  388805 provision.go:177] copyRemoteCerts
	I0419 20:04:24.759806  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:04:24.759838  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.762761  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.763115  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.763155  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.763329  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.763577  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.763820  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.764004  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:24.850831  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:04:24.850902  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:04:24.877207  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:04:24.877283  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 20:04:24.904395  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:04:24.904486  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:04:24.931218  388805 provision.go:87] duration metric: took 291.084326ms to configureAuth
	I0419 20:04:24.931255  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:04:24.931510  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:24.931604  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:24.933978  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.934300  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:24.934333  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:24.934484  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:24.934740  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.934923  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:24.935083  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:24.935258  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:24.935426  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:24.935441  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:04:25.205312  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:04:25.205345  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:04:25.205355  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetURL
	I0419 20:04:25.206877  388805 main.go:141] libmachine: (ha-423356-m02) DBG | Using libvirt version 6000000
	I0419 20:04:25.209278  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.209606  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.209638  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.209756  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:04:25.209784  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:04:25.209792  388805 client.go:171] duration metric: took 26.727212066s to LocalClient.Create
	I0419 20:04:25.209824  388805 start.go:167] duration metric: took 26.72727434s to libmachine.API.Create "ha-423356"
	I0419 20:04:25.209838  388805 start.go:293] postStartSetup for "ha-423356-m02" (driver="kvm2")
	I0419 20:04:25.209851  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:04:25.209895  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.210140  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:04:25.210180  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.212346  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.212723  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.212751  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.212910  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.213100  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.213312  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.213471  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.299433  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:04:25.303605  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:04:25.303629  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:04:25.303688  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:04:25.303760  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:04:25.303771  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:04:25.303848  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:04:25.313106  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:04:25.339239  388805 start.go:296] duration metric: took 129.382003ms for postStartSetup
	I0419 20:04:25.339310  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetConfigRaw
	I0419 20:04:25.340042  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:25.343152  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.343551  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.343575  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.343877  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:04:25.344115  388805 start.go:128] duration metric: took 26.879707029s to createHost
	I0419 20:04:25.344145  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.346668  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.347031  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.347061  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.347199  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.347390  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.347578  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.347732  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.347878  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:04:25.348068  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0419 20:04:25.348082  388805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 20:04:25.462004  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557065.438264488
	
	I0419 20:04:25.462031  388805 fix.go:216] guest clock: 1713557065.438264488
	I0419 20:04:25.462043  388805 fix.go:229] Guest: 2024-04-19 20:04:25.438264488 +0000 UTC Remote: 2024-04-19 20:04:25.34413101 +0000 UTC m=+82.553656179 (delta=94.133478ms)
	I0419 20:04:25.462065  388805 fix.go:200] guest clock delta is within tolerance: 94.133478ms
	I0419 20:04:25.462074  388805 start.go:83] releasing machines lock for "ha-423356-m02", held for 26.997761469s
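The fix.go lines above are minikube's guest-clock probe: it runs `date +%s.%N` inside the VM and compares the result against the host-side timestamp, accepting a small skew. The following is a minimal, self-contained Go sketch of that comparison, using the guest and remote timestamps from this log; the one-second tolerance is an assumed value for illustration and is not taken from this output.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` (seconds.nanoseconds).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest and host timestamps copied from the log lines above.
	guest, err := parseGuestClock("1713557065.438264488")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 4, 19, 20, 4, 25, 344131010, time.UTC)

	delta := guest.Sub(host)
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}

Running the sketch with these values prints a delta of roughly 94.133478ms, matching the "delta=94.133478ms" reported above.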
	I0419 20:04:25.462094  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.462403  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:25.465241  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.465622  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.465647  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.467808  388805 out.go:177] * Found network options:
	I0419 20:04:25.469538  388805 out.go:177]   - NO_PROXY=192.168.39.7
	W0419 20:04:25.470899  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:04:25.470952  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471544  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471736  388805 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:04:25.471854  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:04:25.471892  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	W0419 20:04:25.471975  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:04:25.472056  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:04:25.472079  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:04:25.474681  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.474921  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475069  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.475097  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475198  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.475373  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:25.475396  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.475405  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:25.475532  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.475642  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:04:25.475738  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.475779  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:04:25.475901  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:04:25.476062  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:04:25.719884  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:04:25.726092  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:04:25.726171  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:04:25.744358  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:04:25.744385  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:04:25.744462  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:04:25.761288  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:04:25.776675  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:04:25.776741  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:04:25.791962  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:04:25.807595  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:04:25.932781  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:04:26.098054  388805 docker.go:233] disabling docker service ...
	I0419 20:04:26.098156  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:04:26.113552  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:04:26.127664  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:04:26.264626  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:04:26.401744  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:04:26.425709  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:04:26.444838  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:04:26.444908  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.456338  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:04:26.456415  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.467527  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.478697  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.490090  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:04:26.501627  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.512690  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.530120  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:04:26.541203  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:04:26.551166  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:04:26.551233  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:04:26.565454  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:04:26.575513  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:26.699963  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:04:26.840670  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:04:26.840742  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:04:26.845752  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:04:26.845820  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:04:26.849940  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:04:26.886986  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:04:26.887081  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:04:26.917238  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:04:26.949136  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:04:26.950646  388805 out.go:177]   - env NO_PROXY=192.168.39.7
	I0419 20:04:26.951909  388805 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:04:26.954523  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:26.954827  388805 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:04:13 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:04:26.954856  388805 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:04:26.955133  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:04:26.959642  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:04:26.973554  388805 mustload.go:65] Loading cluster: ha-423356
	I0419 20:04:26.973817  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:26.974219  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:26.974286  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:26.988968  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0419 20:04:26.989468  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:26.989958  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:26.989980  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:26.990257  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:26.990426  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:04:26.991803  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:04:26.992160  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:26.992197  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:27.006957  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0419 20:04:27.007362  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:27.007780  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:27.007801  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:27.008094  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:27.008283  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:04:27.008439  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.121
	I0419 20:04:27.008456  388805 certs.go:194] generating shared ca certs ...
	I0419 20:04:27.008472  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.008602  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:04:27.008670  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:04:27.008683  388805 certs.go:256] generating profile certs ...
	I0419 20:04:27.008756  388805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:04:27.008780  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109
	I0419 20:04:27.008793  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.254]
	I0419 20:04:27.112885  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 ...
	I0419 20:04:27.112916  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109: {Name:mk4864f27249bc288f458043f35d6f5de535ec40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.113085  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109 ...
	I0419 20:04:27.113102  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109: {Name:mk25afeda6db79edfc338a633462b0b1fad5f92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:04:27.113171  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.d7e84109 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:04:27.113300  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.d7e84109 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:04:27.113426  388805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:04:27.113444  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:04:27.113462  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:04:27.113475  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:04:27.113486  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:04:27.113496  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:04:27.113506  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:04:27.113526  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:04:27.113538  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:04:27.113584  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:04:27.113612  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:04:27.113622  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:04:27.113641  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:04:27.113661  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:04:27.113685  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:04:27.113727  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:04:27.113755  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.113769  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.113785  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.113817  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:04:27.117111  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:27.117538  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:04:27.117573  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:27.117790  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:04:27.118005  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:04:27.118157  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:04:27.118326  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:04:27.189115  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0419 20:04:27.194749  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 20:04:27.206481  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0419 20:04:27.211441  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0419 20:04:27.222812  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 20:04:27.227348  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 20:04:27.238181  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0419 20:04:27.242239  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 20:04:27.253430  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0419 20:04:27.257901  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 20:04:27.269371  388805 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0419 20:04:27.273332  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0419 20:04:27.284451  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:04:27.311712  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:04:27.339004  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:04:27.365881  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:04:27.391152  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0419 20:04:27.416177  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:04:27.440792  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:04:27.465655  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:04:27.490454  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:04:27.515673  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:04:27.540428  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:04:27.565533  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 20:04:27.582912  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0419 20:04:27.599989  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 20:04:27.618147  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 20:04:27.635953  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 20:04:27.653125  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0419 20:04:27.669991  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0419 20:04:27.687066  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:04:27.692808  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:04:27.704737  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.709286  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.709347  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:04:27.715005  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:04:27.727274  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:04:27.739271  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.744062  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.744141  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:04:27.750024  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:04:27.762646  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:04:27.775057  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.779743  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.779803  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:04:27.785629  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:04:27.797114  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:04:27.801329  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:04:27.801377  388805 kubeadm.go:928] updating node {m02 192.168.39.121 8443 v1.30.0 crio true true} ...
	I0419 20:04:27.801458  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:04:27.801483  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:04:27.801516  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:04:27.822225  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:04:27.822294  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 20:04:27.822359  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:04:27.834421  388805 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 20:04:27.834487  388805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 20:04:27.845841  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 20:04:27.845857  388805 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0419 20:04:27.847428  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:04:27.845909  388805 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0419 20:04:27.847520  388805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:04:27.852828  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 20:04:27.852861  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 20:04:28.645040  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:04:28.645122  388805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:04:28.650303  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 20:04:28.650332  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 20:04:29.207327  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:04:29.223643  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:04:29.223751  388805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:04:29.228428  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 20:04:29.228474  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 20:04:29.669767  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 20:04:29.679434  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 20:04:29.696503  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:04:29.713269  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:04:29.730434  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:04:29.734296  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:04:29.746392  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:29.861502  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:04:29.878810  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:04:29.879257  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:04:29.879314  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:04:29.897676  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0419 20:04:29.898109  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:04:29.898817  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:04:29.898848  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:04:29.899195  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:04:29.899446  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:04:29.899613  388805 start.go:316] joinCluster: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:04:29.899780  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 20:04:29.899809  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:04:29.902828  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:29.903241  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:04:29.903274  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:04:29.903488  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:04:29.903657  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:04:29.903816  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:04:29.903975  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:04:30.056745  388805 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:04:30.056794  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c842jz.8vslh2ec722m2dzi --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m02 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443"
	I0419 20:04:52.832830  388805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c842jz.8vslh2ec722m2dzi --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m02 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443": (22.776004793s)
	I0419 20:04:52.832907  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 20:04:53.461792  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356-m02 minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=false
	I0419 20:04:53.593929  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-423356-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 20:04:53.708722  388805 start.go:318] duration metric: took 23.809094342s to joinCluster
	I0419 20:04:53.708808  388805 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:04:53.710449  388805 out.go:177] * Verifying Kubernetes components...
	I0419 20:04:53.709161  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:04:53.711797  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:04:54.003770  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:04:54.056560  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:04:54.056922  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 20:04:54.057003  388805 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0419 20:04:54.057371  388805 node_ready.go:35] waiting up to 6m0s for node "ha-423356-m02" to be "Ready" ...
	I0419 20:04:54.057519  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:54.057529  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:54.057539  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:54.057545  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:54.066381  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:04:54.557618  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:54.557650  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:54.557663  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:54.557670  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:54.565308  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:04:55.057966  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:55.057994  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:55.058006  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:55.058013  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:55.061621  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:55.557695  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:55.557725  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:55.557737  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:55.557743  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:55.563203  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:04:56.058415  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:56.058439  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:56.058455  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:56.058459  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:56.061535  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:56.062152  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:04:56.558591  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:56.558615  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:56.558627  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:56.558631  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:56.562428  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:57.058414  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:57.058438  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:57.058446  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:57.058451  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:57.063025  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:57.558307  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:57.558330  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:57.558343  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:57.558348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:57.561906  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:58.058234  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:58.058264  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:58.058275  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:58.058282  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:58.061797  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:04:58.062796  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:04:58.558459  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:58.558484  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:58.558496  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:58.558501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:58.562596  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:59.057578  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:59.057601  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:59.057610  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:59.057614  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:59.061733  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:04:59.557891  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:04:59.557915  388805 round_trippers.go:469] Request Headers:
	I0419 20:04:59.557921  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:04:59.557926  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:04:59.561788  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.057624  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:00.057651  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:00.057660  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:00.057664  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:00.061090  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.557862  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:00.557886  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:00.557895  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:00.557899  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:00.561563  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:00.562209  388805 node_ready.go:53] node "ha-423356-m02" has status "Ready":"False"
	I0419 20:05:01.058577  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:01.058596  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:01.058604  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:01.058608  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:01.061851  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:01.557745  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:01.557767  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:01.557775  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:01.557779  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:01.560837  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.058097  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.058118  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.058128  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.058133  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.061563  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.062383  388805 node_ready.go:49] node "ha-423356-m02" has status "Ready":"True"
	I0419 20:05:02.062402  388805 node_ready.go:38] duration metric: took 8.005005431s for node "ha-423356-m02" to be "Ready" ...
	I0419 20:05:02.062412  388805 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:05:02.062477  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:02.062487  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.062494  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.062501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.067129  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:02.074572  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.074680  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9td9f
	I0419 20:05:02.074691  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.074702  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.074708  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.077943  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.078761  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.078778  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.078785  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.078788  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.081115  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.081654  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.081672  388805 pod_ready.go:81] duration metric: took 7.074672ms for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.081684  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.081742  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rr7zk
	I0419 20:05:02.081751  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.081761  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.081766  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.087602  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:05:02.088318  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.088333  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.088343  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.088348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.090938  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.091355  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.091372  388805 pod_ready.go:81] duration metric: took 9.680689ms for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.091385  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.091453  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356
	I0419 20:05:02.091463  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.091473  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.091477  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.093843  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.094411  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:02.094426  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.094433  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.094436  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.096788  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.097353  388805 pod_ready.go:92] pod "etcd-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:02.097373  388805 pod_ready.go:81] duration metric: took 5.980968ms for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.097385  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:02.097442  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:02.097453  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.097461  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.097465  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.100214  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:02.101285  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.101305  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.101316  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.101321  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.104469  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.597964  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:02.597994  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.598006  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.598013  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.601870  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:02.602469  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:02.602488  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:02.602496  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:02.602500  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:02.605084  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:03.098149  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:03.098225  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.098254  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.098264  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.103071  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:03.103767  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:03.103788  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.103798  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.103803  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.107143  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:03.597583  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:03.597608  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.597616  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.597620  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.602124  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:03.602913  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:03.602928  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:03.602936  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:03.602939  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:03.605986  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:04.097593  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:04.097619  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.097626  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.097631  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.101805  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:04.102425  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:04.102443  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.102457  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.102462  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.105298  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:04.105926  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:04.598430  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:04.598463  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.598472  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.598482  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.601964  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:04.602882  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:04.602897  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:04.602905  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:04.602909  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:04.606297  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.098020  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:05.098044  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.098052  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.098057  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.106939  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:05:05.108186  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:05.108202  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.108209  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.108214  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.111314  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.598398  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:05.598427  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.598441  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.598447  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.601475  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:05.602428  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:05.602443  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:05.602453  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:05.602458  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:05.605156  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:06.098016  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:06.098038  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.098047  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.098051  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.102959  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.103638  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:06.103657  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.103667  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.103673  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.108625  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.109230  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:06.597675  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:06.597703  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.597711  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.597715  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.601802  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:06.602369  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:06.602385  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:06.602393  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:06.602397  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:06.605226  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:07.098240  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:07.098266  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.098274  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.098277  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.101799  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:07.102362  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:07.102379  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.102387  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.102392  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.105211  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:07.598377  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:07.598402  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.598411  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.598415  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.602777  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:07.603529  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:07.603550  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:07.603559  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:07.603564  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:07.606643  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.097602  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:08.097628  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.097637  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.097643  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.101330  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.102029  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:08.102050  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.102061  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.102069  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.104444  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:08.598110  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:08.598134  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.598143  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.598147  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.601785  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.602707  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:08.602723  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:08.602732  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:08.602736  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:08.605766  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:08.606378  388805 pod_ready.go:102] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"False"
	I0419 20:05:09.097650  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:09.097675  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.097683  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.097688  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.101223  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:09.101920  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:09.101942  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.101951  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.101957  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.104477  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:09.598304  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:09.598332  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.598345  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.598350  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.602825  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:09.603439  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:09.603460  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:09.603468  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:09.603473  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:09.606467  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.098265  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:05:10.098290  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.098296  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.098300  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.102294  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.102878  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.102899  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.102910  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.102915  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.106044  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.106652  388805 pod_ready.go:92] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.106671  388805 pod_ready.go:81] duration metric: took 8.009279266s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.106685  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.106735  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356
	I0419 20:05:10.106744  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.106751  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.106756  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.114415  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:05:10.115333  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.115352  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.115363  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.115367  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.118412  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.118998  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.119020  388805 pod_ready.go:81] duration metric: took 12.325135ms for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.119042  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.119105  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m02
	I0419 20:05:10.119116  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.119126  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.119131  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.121490  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.121978  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.121993  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.122002  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.122009  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.124229  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.124703  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.124722  388805 pod_ready.go:81] duration metric: took 5.671466ms for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.124734  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.124800  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:05:10.124810  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.124819  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.124824  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.127165  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.127840  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.127860  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.127870  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.127877  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.130135  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.130707  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.130722  388805 pod_ready.go:81] duration metric: took 5.979961ms for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.130733  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.130787  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:05:10.130797  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.130806  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.130810  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.133166  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.133771  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.133785  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.133794  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.133801  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.135891  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:05:10.136460  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.136480  388805 pod_ready.go:81] duration metric: took 5.738438ms for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.136492  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.298923  388805 request.go:629] Waited for 162.36358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:05:10.299007  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:05:10.299015  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.299024  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.299033  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.302431  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.498366  388805 request.go:629] Waited for 195.307939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.498439  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:10.498446  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.498455  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.498464  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.502509  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:10.503245  388805 pod_ready.go:92] pod "kube-proxy-chd2r" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.503262  388805 pod_ready.go:81] duration metric: took 366.759375ms for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.503273  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.698393  388805 request.go:629] Waited for 195.054471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:05:10.698464  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:05:10.698469  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.698475  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.698479  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.702111  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:10.898455  388805 request.go:629] Waited for 195.304748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.898545  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:10.898564  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:10.898575  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:10.898585  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:10.902958  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:10.903675  388805 pod_ready.go:92] pod "kube-proxy-d56ch" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:10.903694  388805 pod_ready.go:81] duration metric: took 400.412836ms for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:10.903704  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.098852  388805 request.go:629] Waited for 195.077426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:05:11.098934  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:05:11.098939  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.098947  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.098951  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.101972  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.299053  388805 request.go:629] Waited for 196.392329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:11.299129  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:05:11.299143  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.299155  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.299161  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.302455  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.303119  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:11.303138  388805 pod_ready.go:81] duration metric: took 399.428494ms for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.303149  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.499247  388805 request.go:629] Waited for 196.020056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:05:11.499350  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:05:11.499360  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.499371  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.499381  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.502985  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.699219  388805 request.go:629] Waited for 195.355641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:11.699290  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:05:11.699294  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.699302  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.699308  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.703264  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:11.705144  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:05:11.705171  388805 pod_ready.go:81] duration metric: took 402.010035ms for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:05:11.705186  388805 pod_ready.go:38] duration metric: took 9.642760124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:05:11.705212  388805 api_server.go:52] waiting for apiserver process to appear ...
	I0419 20:05:11.705283  388805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:05:11.722507  388805 api_server.go:72] duration metric: took 18.013656423s to wait for apiserver process to appear ...
	I0419 20:05:11.722535  388805 api_server.go:88] waiting for apiserver healthz status ...
	I0419 20:05:11.722558  388805 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0419 20:05:11.726921  388805 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0419 20:05:11.726985  388805 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0419 20:05:11.726993  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.727001  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.727009  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.728059  388805 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0419 20:05:11.728156  388805 api_server.go:141] control plane version: v1.30.0
	I0419 20:05:11.728175  388805 api_server.go:131] duration metric: took 5.633096ms to wait for apiserver health ...
	I0419 20:05:11.728185  388805 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 20:05:11.898611  388805 request.go:629] Waited for 170.337538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:11.898681  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:11.898689  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:11.898699  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:11.898704  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:11.905985  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:05:11.911708  388805 system_pods.go:59] 17 kube-system pods found
	I0419 20:05:11.911737  388805 system_pods.go:61] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:05:11.911742  388805 system_pods.go:61] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:05:11.911746  388805 system_pods.go:61] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:05:11.911749  388805 system_pods.go:61] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:05:11.911757  388805 system_pods.go:61] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:05:11.911762  388805 system_pods.go:61] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:05:11.911768  388805 system_pods.go:61] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:05:11.911773  388805 system_pods.go:61] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:05:11.911777  388805 system_pods.go:61] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:05:11.911785  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:05:11.911793  388805 system_pods.go:61] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:05:11.911798  388805 system_pods.go:61] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:05:11.911805  388805 system_pods.go:61] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:05:11.911814  388805 system_pods.go:61] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:05:11.911818  388805 system_pods.go:61] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:05:11.911826  388805 system_pods.go:61] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:05:11.911830  388805 system_pods.go:61] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:05:11.911836  388805 system_pods.go:74] duration metric: took 183.642064ms to wait for pod list to return data ...
	I0419 20:05:11.911846  388805 default_sa.go:34] waiting for default service account to be created ...
	I0419 20:05:12.098707  388805 request.go:629] Waited for 186.785435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:05:12.098781  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:05:12.098792  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.098802  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.098808  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.102336  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:05:12.102580  388805 default_sa.go:45] found service account: "default"
	I0419 20:05:12.102598  388805 default_sa.go:55] duration metric: took 190.7419ms for default service account to be created ...
	I0419 20:05:12.102614  388805 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 20:05:12.299071  388805 request.go:629] Waited for 196.355569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:12.299132  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:05:12.299136  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.299145  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.299148  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.304086  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:12.308418  388805 system_pods.go:86] 17 kube-system pods found
	I0419 20:05:12.308445  388805 system_pods.go:89] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:05:12.308450  388805 system_pods.go:89] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:05:12.308454  388805 system_pods.go:89] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:05:12.308459  388805 system_pods.go:89] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:05:12.308465  388805 system_pods.go:89] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:05:12.308471  388805 system_pods.go:89] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:05:12.308477  388805 system_pods.go:89] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:05:12.308483  388805 system_pods.go:89] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:05:12.308495  388805 system_pods.go:89] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:05:12.308502  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:05:12.308508  388805 system_pods.go:89] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:05:12.308520  388805 system_pods.go:89] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:05:12.308524  388805 system_pods.go:89] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:05:12.308528  388805 system_pods.go:89] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:05:12.308532  388805 system_pods.go:89] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:05:12.308537  388805 system_pods.go:89] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:05:12.308543  388805 system_pods.go:89] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:05:12.308550  388805 system_pods.go:126] duration metric: took 205.927011ms to wait for k8s-apps to be running ...
	I0419 20:05:12.308557  388805 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 20:05:12.308617  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:05:12.324115  388805 system_svc.go:56] duration metric: took 15.544914ms WaitForService to wait for kubelet
	I0419 20:05:12.324156  388805 kubeadm.go:576] duration metric: took 18.61530927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:05:12.324187  388805 node_conditions.go:102] verifying NodePressure condition ...
	I0419 20:05:12.498606  388805 request.go:629] Waited for 174.323457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0419 20:05:12.498667  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0419 20:05:12.498672  388805 round_trippers.go:469] Request Headers:
	I0419 20:05:12.498680  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:05:12.498684  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:05:12.503330  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:05:12.504298  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:05:12.504326  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:05:12.504347  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:05:12.504353  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:05:12.504360  388805 node_conditions.go:105] duration metric: took 180.166674ms to run NodePressure ...
	I0419 20:05:12.504376  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:05:12.504407  388805 start.go:254] writing updated cluster config ...
	I0419 20:05:12.509120  388805 out.go:177] 
	I0419 20:05:12.510974  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:12.511110  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:12.512994  388805 out.go:177] * Starting "ha-423356-m03" control-plane node in "ha-423356" cluster
	I0419 20:05:12.514233  388805 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:05:12.514273  388805 cache.go:56] Caching tarball of preloaded images
	I0419 20:05:12.514402  388805 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:05:12.514424  388805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:05:12.514540  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:12.514767  388805 start.go:360] acquireMachinesLock for ha-423356-m03: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:05:12.514822  388805 start.go:364] duration metric: took 30.511µs to acquireMachinesLock for "ha-423356-m03"
	I0419 20:05:12.514848  388805 start.go:93] Provisioning new machine with config: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:05:12.514987  388805 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0419 20:05:12.516593  388805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:05:12.516713  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:12.516764  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:12.531598  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0419 20:05:12.532029  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:12.532471  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:12.532492  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:12.532853  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:12.533062  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:12.533281  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:12.533484  388805 start.go:159] libmachine.API.Create for "ha-423356" (driver="kvm2")
	I0419 20:05:12.533518  388805 client.go:168] LocalClient.Create starting
	I0419 20:05:12.533555  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:05:12.533598  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:05:12.533621  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:05:12.533678  388805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:05:12.533698  388805 main.go:141] libmachine: Decoding PEM data...
	I0419 20:05:12.533709  388805 main.go:141] libmachine: Parsing certificate...
	I0419 20:05:12.533726  388805 main.go:141] libmachine: Running pre-create checks...
	I0419 20:05:12.533738  388805 main.go:141] libmachine: (ha-423356-m03) Calling .PreCreateCheck
	I0419 20:05:12.533917  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:12.534353  388805 main.go:141] libmachine: Creating machine...
	I0419 20:05:12.534372  388805 main.go:141] libmachine: (ha-423356-m03) Calling .Create
	I0419 20:05:12.534489  388805 main.go:141] libmachine: (ha-423356-m03) Creating KVM machine...
	I0419 20:05:12.535689  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found existing default KVM network
	I0419 20:05:12.535867  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found existing private KVM network mk-ha-423356
	I0419 20:05:12.536025  388805 main.go:141] libmachine: (ha-423356-m03) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 ...
	I0419 20:05:12.536049  388805 main.go:141] libmachine: (ha-423356-m03) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:05:12.536128  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.536016  389598 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:05:12.536197  388805 main.go:141] libmachine: (ha-423356-m03) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:05:12.781196  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.781088  389598 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa...
	I0419 20:05:12.955595  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.955479  389598 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/ha-423356-m03.rawdisk...
	I0419 20:05:12.955629  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Writing magic tar header
	I0419 20:05:12.955645  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Writing SSH key tar header
	I0419 20:05:12.955662  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:12.955625  389598 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 ...
	I0419 20:05:12.955793  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03
	I0419 20:05:12.955813  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:05:12.955826  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03 (perms=drwx------)
	I0419 20:05:12.955838  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:05:12.955845  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:05:12.955855  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:05:12.955864  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:05:12.955876  388805 main.go:141] libmachine: (ha-423356-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:05:12.955888  388805 main.go:141] libmachine: (ha-423356-m03) Creating domain...
	I0419 20:05:12.955900  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:05:12.955916  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:05:12.955927  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:05:12.955933  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:05:12.955938  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Checking permissions on dir: /home
	I0419 20:05:12.955949  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Skipping /home - not owner
	I0419 20:05:12.957016  388805 main.go:141] libmachine: (ha-423356-m03) define libvirt domain using xml: 
	I0419 20:05:12.957039  388805 main.go:141] libmachine: (ha-423356-m03) <domain type='kvm'>
	I0419 20:05:12.957049  388805 main.go:141] libmachine: (ha-423356-m03)   <name>ha-423356-m03</name>
	I0419 20:05:12.957060  388805 main.go:141] libmachine: (ha-423356-m03)   <memory unit='MiB'>2200</memory>
	I0419 20:05:12.957068  388805 main.go:141] libmachine: (ha-423356-m03)   <vcpu>2</vcpu>
	I0419 20:05:12.957077  388805 main.go:141] libmachine: (ha-423356-m03)   <features>
	I0419 20:05:12.957086  388805 main.go:141] libmachine: (ha-423356-m03)     <acpi/>
	I0419 20:05:12.957096  388805 main.go:141] libmachine: (ha-423356-m03)     <apic/>
	I0419 20:05:12.957103  388805 main.go:141] libmachine: (ha-423356-m03)     <pae/>
	I0419 20:05:12.957112  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957123  388805 main.go:141] libmachine: (ha-423356-m03)   </features>
	I0419 20:05:12.957133  388805 main.go:141] libmachine: (ha-423356-m03)   <cpu mode='host-passthrough'>
	I0419 20:05:12.957174  388805 main.go:141] libmachine: (ha-423356-m03)   
	I0419 20:05:12.957200  388805 main.go:141] libmachine: (ha-423356-m03)   </cpu>
	I0419 20:05:12.957214  388805 main.go:141] libmachine: (ha-423356-m03)   <os>
	I0419 20:05:12.957225  388805 main.go:141] libmachine: (ha-423356-m03)     <type>hvm</type>
	I0419 20:05:12.957237  388805 main.go:141] libmachine: (ha-423356-m03)     <boot dev='cdrom'/>
	I0419 20:05:12.957247  388805 main.go:141] libmachine: (ha-423356-m03)     <boot dev='hd'/>
	I0419 20:05:12.957263  388805 main.go:141] libmachine: (ha-423356-m03)     <bootmenu enable='no'/>
	I0419 20:05:12.957277  388805 main.go:141] libmachine: (ha-423356-m03)   </os>
	I0419 20:05:12.957316  388805 main.go:141] libmachine: (ha-423356-m03)   <devices>
	I0419 20:05:12.957341  388805 main.go:141] libmachine: (ha-423356-m03)     <disk type='file' device='cdrom'>
	I0419 20:05:12.957364  388805 main.go:141] libmachine: (ha-423356-m03)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/boot2docker.iso'/>
	I0419 20:05:12.957381  388805 main.go:141] libmachine: (ha-423356-m03)       <target dev='hdc' bus='scsi'/>
	I0419 20:05:12.957395  388805 main.go:141] libmachine: (ha-423356-m03)       <readonly/>
	I0419 20:05:12.957405  388805 main.go:141] libmachine: (ha-423356-m03)     </disk>
	I0419 20:05:12.957418  388805 main.go:141] libmachine: (ha-423356-m03)     <disk type='file' device='disk'>
	I0419 20:05:12.957430  388805 main.go:141] libmachine: (ha-423356-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:05:12.957447  388805 main.go:141] libmachine: (ha-423356-m03)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/ha-423356-m03.rawdisk'/>
	I0419 20:05:12.957468  388805 main.go:141] libmachine: (ha-423356-m03)       <target dev='hda' bus='virtio'/>
	I0419 20:05:12.957487  388805 main.go:141] libmachine: (ha-423356-m03)     </disk>
	I0419 20:05:12.957502  388805 main.go:141] libmachine: (ha-423356-m03)     <interface type='network'>
	I0419 20:05:12.957515  388805 main.go:141] libmachine: (ha-423356-m03)       <source network='mk-ha-423356'/>
	I0419 20:05:12.957523  388805 main.go:141] libmachine: (ha-423356-m03)       <model type='virtio'/>
	I0419 20:05:12.957534  388805 main.go:141] libmachine: (ha-423356-m03)     </interface>
	I0419 20:05:12.957548  388805 main.go:141] libmachine: (ha-423356-m03)     <interface type='network'>
	I0419 20:05:12.957559  388805 main.go:141] libmachine: (ha-423356-m03)       <source network='default'/>
	I0419 20:05:12.957573  388805 main.go:141] libmachine: (ha-423356-m03)       <model type='virtio'/>
	I0419 20:05:12.957584  388805 main.go:141] libmachine: (ha-423356-m03)     </interface>
	I0419 20:05:12.957594  388805 main.go:141] libmachine: (ha-423356-m03)     <serial type='pty'>
	I0419 20:05:12.957604  388805 main.go:141] libmachine: (ha-423356-m03)       <target port='0'/>
	I0419 20:05:12.957623  388805 main.go:141] libmachine: (ha-423356-m03)     </serial>
	I0419 20:05:12.957640  388805 main.go:141] libmachine: (ha-423356-m03)     <console type='pty'>
	I0419 20:05:12.957657  388805 main.go:141] libmachine: (ha-423356-m03)       <target type='serial' port='0'/>
	I0419 20:05:12.957667  388805 main.go:141] libmachine: (ha-423356-m03)     </console>
	I0419 20:05:12.957680  388805 main.go:141] libmachine: (ha-423356-m03)     <rng model='virtio'>
	I0419 20:05:12.957692  388805 main.go:141] libmachine: (ha-423356-m03)       <backend model='random'>/dev/random</backend>
	I0419 20:05:12.957701  388805 main.go:141] libmachine: (ha-423356-m03)     </rng>
	I0419 20:05:12.957715  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957730  388805 main.go:141] libmachine: (ha-423356-m03)     
	I0419 20:05:12.957744  388805 main.go:141] libmachine: (ha-423356-m03)   </devices>
	I0419 20:05:12.957757  388805 main.go:141] libmachine: (ha-423356-m03) </domain>
	I0419 20:05:12.957763  388805 main.go:141] libmachine: (ha-423356-m03) 
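For readers following the driver log: the domain XML printed above is simply handed to libvirt and booted. A minimal, hedged sketch of doing the same thing by hand in Go, shelling out to virsh instead of using the driver's libvirt bindings (the connection URI and the XML file path are assumptions, not taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStartDomain hands a domain XML file to libvirt and boots it,
	// roughly what the kvm2 driver does programmatically above.
	// Assumes virsh is installed and qemu:///system is reachable.
	func defineAndStartDomain(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// The XML path is hypothetical; the domain name is the one from this log.
		if err := defineAndStartDomain("/tmp/ha-423356-m03.xml", "ha-423356-m03"); err != nil {
			fmt.Println(err)
		}
	}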
	I0419 20:05:12.965531  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:7f:8d:21 in network default
	I0419 20:05:12.966109  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:12.966125  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring networks are active...
	I0419 20:05:12.966850  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring network default is active
	I0419 20:05:12.967203  388805 main.go:141] libmachine: (ha-423356-m03) Ensuring network mk-ha-423356 is active
	I0419 20:05:12.967578  388805 main.go:141] libmachine: (ha-423356-m03) Getting domain xml...
	I0419 20:05:12.968345  388805 main.go:141] libmachine: (ha-423356-m03) Creating domain...
	I0419 20:05:14.183762  388805 main.go:141] libmachine: (ha-423356-m03) Waiting to get IP...
	I0419 20:05:14.184701  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.185167  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.185234  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.185160  389598 retry.go:31] will retry after 283.969012ms: waiting for machine to come up
	I0419 20:05:14.470670  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.470995  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.471029  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.470977  389598 retry.go:31] will retry after 384.20274ms: waiting for machine to come up
	I0419 20:05:14.856501  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:14.856943  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:14.856971  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:14.856902  389598 retry.go:31] will retry after 483.55961ms: waiting for machine to come up
	I0419 20:05:15.341765  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:15.342311  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:15.342342  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:15.342260  389598 retry.go:31] will retry after 489.203595ms: waiting for machine to come up
	I0419 20:05:15.832901  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:15.833411  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:15.833449  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:15.833362  389598 retry.go:31] will retry after 553.302739ms: waiting for machine to come up
	I0419 20:05:16.387965  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:16.388388  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:16.388422  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:16.388323  389598 retry.go:31] will retry after 809.088382ms: waiting for machine to come up
	I0419 20:05:17.198680  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:17.199231  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:17.199267  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:17.199167  389598 retry.go:31] will retry after 748.965459ms: waiting for machine to come up
	I0419 20:05:17.950319  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:17.950812  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:17.950841  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:17.950751  389598 retry.go:31] will retry after 1.000266671s: waiting for machine to come up
	I0419 20:05:18.952983  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:18.953501  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:18.953533  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:18.953444  389598 retry.go:31] will retry after 1.410601616s: waiting for machine to come up
	I0419 20:05:20.365780  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:20.366286  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:20.366306  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:20.366237  389598 retry.go:31] will retry after 1.859485208s: waiting for machine to come up
	I0419 20:05:22.227079  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:22.227659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:22.227695  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:22.227587  389598 retry.go:31] will retry after 2.263798453s: waiting for machine to come up
	I0419 20:05:24.492659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:24.493053  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:24.493085  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:24.492990  389598 retry.go:31] will retry after 3.471867165s: waiting for machine to come up
	I0419 20:05:27.966230  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:27.966720  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:27.966748  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:27.966678  389598 retry.go:31] will retry after 3.751116138s: waiting for machine to come up
	I0419 20:05:31.719321  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:31.719645  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find current IP address of domain ha-423356-m03 in network mk-ha-423356
	I0419 20:05:31.719670  388805 main.go:141] libmachine: (ha-423356-m03) DBG | I0419 20:05:31.719587  389598 retry.go:31] will retry after 5.08434409s: waiting for machine to come up
	I0419 20:05:36.805700  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.806130  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has current primary IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.806145  388805 main.go:141] libmachine: (ha-423356-m03) Found IP for machine: 192.168.39.111
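The block above shows retry.go polling for a DHCP lease with a growing, jittered delay until the lease for 52:54:00:fc:cf:fe finally appears after about 24 seconds. A small stand-alone sketch of that wait loop; the lookup function here is a placeholder assumption, not the driver's real lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup with a growing, jittered delay until an IP shows up,
	// the same shape as the retry.go backoff visible in the log above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
		}
		return "", errors.New("timed out waiting for an IP")
	}

	func main() {
		// Fake lookup that "finds" an address after a few attempts, for illustration.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.111", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}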
	I0419 20:05:36.806165  388805 main.go:141] libmachine: (ha-423356-m03) Reserving static IP address...
	I0419 20:05:36.806532  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find host DHCP lease matching {name: "ha-423356-m03", mac: "52:54:00:fc:cf:fe", ip: "192.168.39.111"} in network mk-ha-423356
	I0419 20:05:36.880933  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Getting to WaitForSSH function...
	I0419 20:05:36.880994  388805 main.go:141] libmachine: (ha-423356-m03) Reserved static IP address: 192.168.39.111
	I0419 20:05:36.881010  388805 main.go:141] libmachine: (ha-423356-m03) Waiting for SSH to be available...
	I0419 20:05:36.883767  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:36.884211  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356
	I0419 20:05:36.884235  388805 main.go:141] libmachine: (ha-423356-m03) DBG | unable to find defined IP address of network mk-ha-423356 interface with MAC address 52:54:00:fc:cf:fe
	I0419 20:05:36.884394  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH client type: external
	I0419 20:05:36.884422  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa (-rw-------)
	I0419 20:05:36.884453  388805 main.go:141] libmachine: (ha-423356-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:05:36.884469  388805 main.go:141] libmachine: (ha-423356-m03) DBG | About to run SSH command:
	I0419 20:05:36.884490  388805 main.go:141] libmachine: (ha-423356-m03) DBG | exit 0
	I0419 20:05:36.888311  388805 main.go:141] libmachine: (ha-423356-m03) DBG | SSH cmd err, output: exit status 255: 
	I0419 20:05:36.888330  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0419 20:05:36.888338  388805 main.go:141] libmachine: (ha-423356-m03) DBG | command : exit 0
	I0419 20:05:36.888348  388805 main.go:141] libmachine: (ha-423356-m03) DBG | err     : exit status 255
	I0419 20:05:36.888355  388805 main.go:141] libmachine: (ha-423356-m03) DBG | output  : 
	I0419 20:05:39.890772  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Getting to WaitForSSH function...
	I0419 20:05:39.893113  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:39.893535  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:39.893565  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:39.893673  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH client type: external
	I0419 20:05:39.893695  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa (-rw-------)
	I0419 20:05:39.893730  388805 main.go:141] libmachine: (ha-423356-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:05:39.893756  388805 main.go:141] libmachine: (ha-423356-m03) DBG | About to run SSH command:
	I0419 20:05:39.893781  388805 main.go:141] libmachine: (ha-423356-m03) DBG | exit 0
	I0419 20:05:40.020539  388805 main.go:141] libmachine: (ha-423356-m03) DBG | SSH cmd err, output: <nil>: 
	I0419 20:05:40.020828  388805 main.go:141] libmachine: (ha-423356-m03) KVM machine creation complete!
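The WaitForSSH probe above just runs `exit 0` over ssh: the first attempt fails with exit status 255 because no address was known yet and sshd was not up, the second succeeds once the lease for 192.168.39.111 exists. A hedged sketch of the same readiness check, using only a subset of the ssh options shown in the log (the key path is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady mirrors the WaitForSSH probe: run `exit 0` over ssh and treat a
	// zero exit status as "sshd is up and the key is accepted".
	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		// Address from this log; key path is hypothetical.
		for !sshReady("192.168.39.111", "/path/to/id_rsa") {
			fmt.Println("ssh not ready yet, retrying in 3s")
			time.Sleep(3 * time.Second)
		}
		fmt.Println("ssh is available")
	}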
	I0419 20:05:40.021183  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:40.021776  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:40.021991  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:40.022177  388805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:05:40.022201  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:05:40.023490  388805 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:05:40.023503  388805 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:05:40.023510  388805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:05:40.023515  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.025615  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.026055  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.026089  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.026197  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.026375  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.026511  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.026639  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.026792  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.027063  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.027083  388805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:05:40.140602  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:05:40.140626  388805 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:05:40.140652  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.143644  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.144040  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.144070  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.144270  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.144501  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.144696  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.144866  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.145067  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.145296  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.145312  388805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:05:40.257769  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:05:40.257847  388805 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:05:40.257861  388805 main.go:141] libmachine: Provisioning with buildroot...
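Provisioner detection is just `cat /etc/os-release` plus a match on the ID field, which is how "buildroot" is recognised above. A minimal sketch of that parse, standard library only:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osReleaseID returns the ID= value from an os-release style file,
	// e.g. "buildroot" for the guest image in this log.
	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		fmt.Println(id, err)
	}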
	I0419 20:05:40.257872  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.258170  388805 buildroot.go:166] provisioning hostname "ha-423356-m03"
	I0419 20:05:40.258202  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.258439  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.260916  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.261286  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.261316  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.261426  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.261619  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.261766  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.261941  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.262110  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.262280  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.262291  388805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356-m03 && echo "ha-423356-m03" | sudo tee /etc/hostname
	I0419 20:05:40.388355  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356-m03
	
	I0419 20:05:40.388397  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.391284  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.391629  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.391666  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.391858  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.392071  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.392226  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.392363  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.392509  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.392738  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.392756  388805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:05:40.514848  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:05:40.514881  388805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:05:40.514903  388805 buildroot.go:174] setting up certificates
	I0419 20:05:40.514914  388805 provision.go:84] configureAuth start
	I0419 20:05:40.514932  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetMachineName
	I0419 20:05:40.515217  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:40.518036  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.518463  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.518504  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.518700  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.520940  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.521297  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.521326  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.521493  388805 provision.go:143] copyHostCerts
	I0419 20:05:40.521530  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:05:40.521571  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:05:40.521583  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:05:40.521665  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:05:40.521767  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:05:40.521795  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:05:40.521801  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:05:40.521838  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:05:40.521900  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:05:40.521925  388805 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:05:40.521937  388805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:05:40.521978  388805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:05:40.522058  388805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356-m03 san=[127.0.0.1 192.168.39.111 ha-423356-m03 localhost minikube]
	I0419 20:05:40.787540  388805 provision.go:177] copyRemoteCerts
	I0419 20:05:40.787603  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:05:40.787628  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.790222  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.790608  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.790640  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.790776  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.791016  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.791161  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.791351  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:40.879307  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:05:40.879377  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:05:40.906826  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:05:40.906923  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0419 20:05:40.934396  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:05:40.934473  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:05:40.960610  388805 provision.go:87] duration metric: took 445.681947ms to configureAuth
	I0419 20:05:40.960655  388805 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:05:40.960860  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:40.960963  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:40.963699  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.964104  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:40.964130  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:40.964298  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:40.964499  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.964684  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:40.964821  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:40.965009  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:40.965213  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:40.965235  388805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:05:41.248937  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:05:41.248972  388805 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:05:41.248981  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetURL
	I0419 20:05:41.250633  388805 main.go:141] libmachine: (ha-423356-m03) DBG | Using libvirt version 6000000
	I0419 20:05:41.252996  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.253382  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.253411  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.253641  388805 main.go:141] libmachine: Docker is up and running!
	I0419 20:05:41.253660  388805 main.go:141] libmachine: Reticulating splines...
	I0419 20:05:41.253668  388805 client.go:171] duration metric: took 28.720141499s to LocalClient.Create
	I0419 20:05:41.253695  388805 start.go:167] duration metric: took 28.7202136s to libmachine.API.Create "ha-423356"
	I0419 20:05:41.253705  388805 start.go:293] postStartSetup for "ha-423356-m03" (driver="kvm2")
	I0419 20:05:41.253715  388805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:05:41.253744  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.253968  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:05:41.253998  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.256313  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.256601  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.256649  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.256901  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.257078  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.257252  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.257418  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.343433  388805 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:05:41.348557  388805 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:05:41.348584  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:05:41.348686  388805 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:05:41.348782  388805 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:05:41.348800  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:05:41.348912  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:05:41.359212  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:05:41.388584  388805 start.go:296] duration metric: took 134.857661ms for postStartSetup
	I0419 20:05:41.388680  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetConfigRaw
	I0419 20:05:41.389390  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:41.391939  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.392250  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.392283  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.392580  388805 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:05:41.392808  388805 start.go:128] duration metric: took 28.877809223s to createHost
	I0419 20:05:41.392835  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.395173  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.395609  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.395637  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.395781  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.395959  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.396115  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.396248  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.396443  388805 main.go:141] libmachine: Using SSH client type: native
	I0419 20:05:41.396666  388805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0419 20:05:41.396683  388805 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:05:41.509662  388805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557141.485721310
	
	I0419 20:05:41.509689  388805 fix.go:216] guest clock: 1713557141.485721310
	I0419 20:05:41.509699  388805 fix.go:229] Guest: 2024-04-19 20:05:41.48572131 +0000 UTC Remote: 2024-04-19 20:05:41.392822689 +0000 UTC m=+158.602347846 (delta=92.898621ms)
	I0419 20:05:41.509721  388805 fix.go:200] guest clock delta is within tolerance: 92.898621ms
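The clock check compares `date +%s.%N` on the guest against the host's wall clock and accepts the machine when the skew is small; here the delta works out to about 93ms. A sketch of that comparison using the values from this log (the 2s tolerance is an assumption for illustration; the log does not state the actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest/host clock skew is acceptable,
	// as in the "guest clock delta is within tolerance" check above.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1713557141, 485721310) // seconds.nanoseconds from `date +%s.%N`
		host := time.Date(2024, 4, 19, 20, 5, 41, 392822689, time.UTC)
		delta, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Println(delta, ok) // ~92.898621ms true
	}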
	I0419 20:05:41.509728  388805 start.go:83] releasing machines lock for "ha-423356-m03", held for 28.994892092s
	I0419 20:05:41.509750  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.510026  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:41.513044  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.513458  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.513494  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.515947  388805 out.go:177] * Found network options:
	I0419 20:05:41.517336  388805 out.go:177]   - NO_PROXY=192.168.39.7,192.168.39.121
	W0419 20:05:41.518516  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 20:05:41.518535  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:05:41.518551  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519149  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519366  388805 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:05:41.519457  388805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:05:41.519486  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	W0419 20:05:41.519588  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	W0419 20:05:41.519613  388805 proxy.go:119] fail to check proxy env: Error ip not in block
	I0419 20:05:41.519696  388805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:05:41.519721  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:05:41.522051  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522322  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522391  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.522417  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522571  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.522659  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:41.522688  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:41.522740  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.522831  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:05:41.522911  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.522993  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:05:41.523052  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.523122  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:05:41.523253  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:05:41.764601  388805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:05:41.770852  388805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:05:41.770930  388805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:05:41.788130  388805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:05:41.788155  388805 start.go:494] detecting cgroup driver to use...
	I0419 20:05:41.788220  388805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:05:41.804494  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:05:41.819899  388805 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:05:41.819979  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:05:41.835050  388805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:05:41.849817  388805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:05:41.977220  388805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:05:42.153349  388805 docker.go:233] disabling docker service ...
	I0419 20:05:42.153424  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:05:42.170662  388805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:05:42.185440  388805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:05:42.306766  388805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:05:42.436811  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:05:42.452596  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:05:42.471917  388805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:05:42.471990  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.483281  388805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:05:42.483354  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.494602  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.507226  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.520991  388805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:05:42.533385  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.545535  388805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:05:42.565088  388805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
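Pieced together from the sed edits above, the keys they leave behind in /etc/crio/crio.conf.d/02-crio.conf look roughly like this (reconstructed from the commands, not copied from the guest; section headers are omitted because the log does not show them):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]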
	I0419 20:05:42.577308  388805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:05:42.589447  388805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:05:42.589517  388805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:05:42.603436  388805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
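The netfilter sequence above is: probe the bridge-nf-call-iptables sysctl, load br_netfilter if the probe fails (as it does here), then force IPv4 forwarding on before restarting crio. A small sketch of the same sequence; running as root inside the guest, and the exact error handling, are assumptions:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// enableNetfilter mirrors the log above: if the bridge-nf-call-iptables
	// sysctl is missing, load br_netfilter, then make sure IPv4 forwarding is on.
	func enableNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", merr, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}

	func main() {
		if err := enableNetfilter(); err != nil {
			fmt.Println(err)
		}
	}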
	I0419 20:05:42.614245  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:05:42.738267  388805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:05:42.882243  388805 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:05:42.882336  388805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:05:42.887517  388805 start.go:562] Will wait 60s for crictl version
	I0419 20:05:42.887568  388805 ssh_runner.go:195] Run: which crictl
	I0419 20:05:42.891669  388805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:05:42.933597  388805 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:05:42.933682  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:05:42.964730  388805 ssh_runner.go:195] Run: crio --version
	I0419 20:05:42.996171  388805 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:05:42.997531  388805 out.go:177]   - env NO_PROXY=192.168.39.7
	I0419 20:05:42.998808  388805 out.go:177]   - env NO_PROXY=192.168.39.7,192.168.39.121
	I0419 20:05:42.999904  388805 main.go:141] libmachine: (ha-423356-m03) Calling .GetIP
	I0419 20:05:43.003049  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:43.003525  388805 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:05:43.003550  388805 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:05:43.003795  388805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:05:43.008264  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:05:43.020924  388805 mustload.go:65] Loading cluster: ha-423356
	I0419 20:05:43.021170  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:05:43.021441  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:43.021492  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:43.036579  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0419 20:05:43.037129  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:43.037612  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:43.037634  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:43.037966  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:43.038140  388805 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:05:43.039724  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:05:43.040044  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:43.040085  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:43.055254  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0419 20:05:43.055677  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:43.056206  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:43.056232  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:43.056561  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:43.056791  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:05:43.056985  388805 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.111
	I0419 20:05:43.056998  388805 certs.go:194] generating shared ca certs ...
	I0419 20:05:43.057017  388805 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.057176  388805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:05:43.057235  388805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:05:43.057251  388805 certs.go:256] generating profile certs ...
	I0419 20:05:43.057361  388805 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:05:43.057396  388805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18
	I0419 20:05:43.057421  388805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.111 192.168.39.254]
	I0419 20:05:43.213129  388805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 ...
	I0419 20:05:43.213164  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18: {Name:mk07affa39edd4b79403c8ce6388763e4d72916b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.213357  388805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18 ...
	I0419 20:05:43.213375  388805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18: {Name:mk825efea1197d117993825e19ca076825193566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:05:43.213478  388805 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.10968f18 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:05:43.213618  388805 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.10968f18 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:05:43.213747  388805 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:05:43.213764  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:05:43.213777  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:05:43.213790  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:05:43.213802  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:05:43.213815  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:05:43.213827  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:05:43.213838  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:05:43.213850  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:05:43.213908  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:05:43.213938  388805 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:05:43.213948  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:05:43.213969  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:05:43.213990  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:05:43.214011  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:05:43.214046  388805 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:05:43.214071  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.214085  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.214097  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.214138  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:05:43.217578  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:43.217973  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:05:43.217999  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:43.218239  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:05:43.218460  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:05:43.218629  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:05:43.218772  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:05:43.297003  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0419 20:05:43.306332  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0419 20:05:43.319708  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0419 20:05:43.324580  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0419 20:05:43.337027  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0419 20:05:43.341928  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0419 20:05:43.356449  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0419 20:05:43.360966  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0419 20:05:43.377564  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0419 20:05:43.382865  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0419 20:05:43.403933  388805 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0419 20:05:43.409620  388805 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0419 20:05:43.421889  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:05:43.450062  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:05:43.476554  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:05:43.503266  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:05:43.529298  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0419 20:05:43.561098  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:05:43.589306  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:05:43.615565  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:05:43.643151  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:05:43.671144  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:05:43.701196  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:05:43.726835  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0419 20:05:43.744776  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0419 20:05:43.763639  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0419 20:05:43.781653  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0419 20:05:43.800969  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0419 20:05:43.818673  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0419 20:05:43.836373  388805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0419 20:05:43.853934  388805 ssh_runner.go:195] Run: openssl version
	I0419 20:05:43.859990  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:05:43.870978  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.875669  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.875747  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:05:43.882009  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:05:43.893299  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:05:43.908975  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.916018  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.916089  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:05:43.922820  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:05:43.934423  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:05:43.946418  388805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.951554  388805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.951609  388805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:05:43.958159  388805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:05:43.970155  388805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:05:43.974749  388805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:05:43.974815  388805 kubeadm.go:928] updating node {m03 192.168.39.111 8443 v1.30.0 crio true true} ...
	I0419 20:05:43.974921  388805 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:05:43.974962  388805 kube-vip.go:111] generating kube-vip config ...
	I0419 20:05:43.975012  388805 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:05:43.992465  388805 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:05:43.992534  388805 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 20:05:43.992581  388805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:05:44.004253  388805 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0419 20:05:44.004321  388805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0419 20:05:44.015815  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0419 20:05:44.015850  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:05:44.015866  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0419 20:05:44.015866  388805 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0419 20:05:44.015891  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:05:44.015912  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:05:44.015922  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0419 20:05:44.015962  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0419 20:05:44.034224  388805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:05:44.034240  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0419 20:05:44.034271  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0419 20:05:44.034314  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0419 20:05:44.034333  388805 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0419 20:05:44.034341  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0419 20:05:44.065796  388805 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0419 20:05:44.065843  388805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0419 20:05:45.042494  388805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0419 20:05:45.054258  388805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0419 20:05:45.072558  388805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:05:45.091171  388805 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:05:45.108420  388805 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:05:45.112514  388805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:05:45.125614  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:05:45.264021  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:05:45.285069  388805 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:05:45.285544  388805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:05:45.285591  388805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:05:45.301995  388805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0419 20:05:45.302584  388805 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:05:45.303129  388805 main.go:141] libmachine: Using API Version  1
	I0419 20:05:45.303164  388805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:05:45.303572  388805 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:05:45.303824  388805 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:05:45.303994  388805 start.go:316] joinCluster: &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:05:45.304168  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0419 20:05:45.304190  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:05:45.307563  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:45.307996  388805 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:05:45.308026  388805 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:05:45.308208  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:05:45.308426  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:05:45.308597  388805 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:05:45.308779  388805 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:05:45.683924  388805 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:05:45.683974  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2828w.mz4l9arxpw0m036n --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0419 20:06:13.922617  388805 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b2828w.mz4l9arxpw0m036n --discovery-token-ca-cert-hash sha256:673e0ed329d6cd4989b895691150381308271ce12bf5524f53861537190c10ea --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-423356-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (28.238613784s)
	I0419 20:06:13.922666  388805 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0419 20:06:14.536987  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-423356-m03 minikube.k8s.io/updated_at=2024_04_19T20_06_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=ha-423356 minikube.k8s.io/primary=false
	I0419 20:06:14.688778  388805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-423356-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0419 20:06:14.812081  388805 start.go:318] duration metric: took 29.508082602s to joinCluster
	I0419 20:06:14.812171  388805 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:06:14.814021  388805 out.go:177] * Verifying Kubernetes components...
	I0419 20:06:14.812485  388805 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:06:14.815801  388805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:06:15.101973  388805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:06:15.136788  388805 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:06:15.137176  388805 kapi.go:59] client config for ha-423356: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key", CAFile:"/home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0419 20:06:15.137278  388805 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0419 20:06:15.137619  388805 node_ready.go:35] waiting up to 6m0s for node "ha-423356-m03" to be "Ready" ...
	I0419 20:06:15.137736  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:15.137750  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:15.137762  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:15.137768  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:15.142385  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:15.637991  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:15.638020  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:15.638031  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:15.638037  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:15.641820  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:16.137914  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:16.137942  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:16.137961  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:16.137966  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:16.143360  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:16.638367  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:16.638390  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:16.638400  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:16.638406  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:16.641794  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:17.138709  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:17.138738  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:17.138750  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:17.138755  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:17.142398  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:17.143298  388805 node_ready.go:53] node "ha-423356-m03" has status "Ready":"False"
	I0419 20:06:17.637854  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:17.637888  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:17.637900  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:17.637906  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:17.641734  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:18.137906  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:18.137932  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:18.137941  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:18.137945  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:18.141846  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:18.638458  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:18.638484  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:18.638492  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:18.638497  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:18.642604  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:19.138705  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:19.138734  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:19.138746  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:19.138756  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:19.142297  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:19.143592  388805 node_ready.go:53] node "ha-423356-m03" has status "Ready":"False"
	I0419 20:06:19.638522  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:19.638557  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:19.638568  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:19.638573  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:19.643536  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:20.138147  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:20.138171  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:20.138181  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:20.138190  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:20.142289  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:20.637970  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:20.637994  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:20.638003  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:20.638007  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:20.642092  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.137947  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:21.137985  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.137998  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.138003  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.141854  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.638775  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:21.638801  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.638811  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.638816  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.642170  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.642763  388805 node_ready.go:49] node "ha-423356-m03" has status "Ready":"True"
	I0419 20:06:21.642783  388805 node_ready.go:38] duration metric: took 6.505137736s for node "ha-423356-m03" to be "Ready" ...
	I0419 20:06:21.642794  388805 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:06:21.642866  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:21.642906  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.642921  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.642930  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.650185  388805 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0419 20:06:21.658714  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.658834  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9td9f
	I0419 20:06:21.658848  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.658858  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.658867  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.663279  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.663983  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.664004  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.664013  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.664019  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.667036  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.667836  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.667852  388805 pod_ready.go:81] duration metric: took 9.105443ms for pod "coredns-7db6d8ff4d-9td9f" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.667871  388805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.667922  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rr7zk
	I0419 20:06:21.667930  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.667937  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.667940  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.670825  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:21.671552  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.671570  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.671581  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.671589  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.675714  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:21.676866  388805 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.676883  388805 pod_ready.go:81] duration metric: took 9.003622ms for pod "coredns-7db6d8ff4d-rr7zk" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.676900  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.676961  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356
	I0419 20:06:21.676973  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.676981  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.676988  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.680352  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.681296  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:21.681310  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.681315  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.681319  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.683782  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:21.684331  388805 pod_ready.go:92] pod "etcd-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.684350  388805 pod_ready.go:81] duration metric: took 7.441096ms for pod "etcd-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.684392  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.684465  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m02
	I0419 20:06:21.684475  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.684484  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.684501  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.688207  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.689138  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:21.689154  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.689161  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.689169  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.692580  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:21.696141  388805 pod_ready.go:92] pod "etcd-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:21.696164  388805 pod_ready.go:81] duration metric: took 11.76079ms for pod "etcd-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.696176  388805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:21.838864  388805 request.go:629] Waited for 142.603813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:21.838942  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:21.838950  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:21.838961  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:21.838972  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:21.842639  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.039398  388805 request.go:629] Waited for 195.81871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.039463  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.039468  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.039476  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.039480  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.043227  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.239082  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:22.239105  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.239116  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.239123  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.244515  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:22.439059  388805 request.go:629] Waited for 193.350851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.439118  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.439122  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.439130  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.439135  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.442870  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.697216  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:22.697243  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.697251  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.697257  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.700691  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:22.839508  388805 request.go:629] Waited for 138.045161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.839568  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:22.839573  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:22.839581  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:22.839586  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:22.843327  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.196819  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:23.196848  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.196858  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.196864  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.204908  388805 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0419 20:06:23.239213  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:23.239266  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.239279  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.239289  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.243211  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.696383  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:23.696410  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.696419  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.696424  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.699959  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.700815  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:23.700831  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:23.700838  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:23.700844  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:23.703902  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:23.704480  388805 pod_ready.go:102] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"False"
	I0419 20:06:24.196843  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:24.196867  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.196876  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.196881  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.201207  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:24.202431  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:24.202459  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.202471  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.202477  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.205617  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:24.696695  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:24.696722  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.696734  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.696743  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.700012  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:24.700679  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:24.700700  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:24.700712  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:24.700715  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:24.703606  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:25.197107  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:25.197133  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.197140  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.197144  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.201265  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:25.202296  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:25.202317  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.202326  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.202329  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.205409  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:25.696947  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:25.696973  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.696979  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.696983  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.701210  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:25.702108  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:25.702125  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:25.702132  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:25.702136  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:25.705573  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:25.706268  388805 pod_ready.go:102] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"False"
	I0419 20:06:26.197306  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:26.197334  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.197344  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.197348  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.200859  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:26.201526  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:26.201540  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.201546  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.201550  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.204378  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:26.697343  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:26.697369  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.697380  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.697386  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.702462  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:26.703176  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:26.703190  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:26.703199  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:26.703204  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:26.706216  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:27.197350  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-423356-m03
	I0419 20:06:27.197400  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.197419  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.197431  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.203530  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:27.204368  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.204390  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.204402  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.204408  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.208863  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:27.209532  388805 pod_ready.go:92] pod "etcd-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.209552  388805 pod_ready.go:81] duration metric: took 5.513367498s for pod "etcd-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.209575  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.209640  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356
	I0419 20:06:27.209650  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.209660  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.209668  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.212857  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.213768  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:27.213787  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.213798  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.213803  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.218187  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:27.218780  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.218807  388805 pod_ready.go:81] duration metric: took 9.222056ms for pod "kube-apiserver-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.218820  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.218989  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m02
	I0419 20:06:27.219011  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.219019  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.219024  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.222546  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.239149  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:27.239168  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.239181  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.239188  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.242216  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.242923  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:27.242958  388805 pod_ready.go:81] duration metric: took 24.128714ms for pod "kube-apiserver-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.242972  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:27.439427  388805 request.go:629] Waited for 196.369758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.439530  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.439536  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.439544  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.439550  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.442868  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.639233  388805 request.go:629] Waited for 195.590331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.639302  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:27.639308  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.639315  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.639320  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.642879  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:27.838994  388805 request.go:629] Waited for 95.280816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.839077  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:27.839086  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:27.839095  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:27.839099  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:27.842612  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.038842  388805 request.go:629] Waited for 195.230671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.038914  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.038926  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.038938  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.038950  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.042660  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.243790  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:28.243819  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.243830  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.243834  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.247999  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:28.439103  388805 request.go:629] Waited for 190.38294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.439191  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.439200  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.439214  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.439225  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.442595  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:28.743724  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:28.743751  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.743762  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.743773  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.747830  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:28.839184  388805 request.go:629] Waited for 90.19036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.839244  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:28.839249  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:28.839259  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:28.839265  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:28.843324  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:29.243785  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423356-m03
	I0419 20:06:29.243830  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.243840  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.243858  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.247970  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:29.248804  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:29.248821  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.248831  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.248839  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.251790  388805 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0419 20:06:29.252447  388805 pod_ready.go:92] pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:29.252466  388805 pod_ready.go:81] duration metric: took 2.009486005s for pod "kube-apiserver-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.252480  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.438857  388805 request.go:629] Waited for 186.304892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:06:29.438943  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356
	I0419 20:06:29.438951  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.438961  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.438966  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.442496  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:29.639550  388805 request.go:629] Waited for 196.413304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:29.639614  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:29.639620  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.639628  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.639634  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.642803  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:29.643484  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:29.643504  388805 pod_ready.go:81] duration metric: took 391.012562ms for pod "kube-controller-manager-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.643514  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:29.839701  388805 request.go:629] Waited for 196.078245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:06:29.839774  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m02
	I0419 20:06:29.839782  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:29.839794  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:29.839802  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:29.842904  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.039132  388805 request.go:629] Waited for 195.278783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:30.039190  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:30.039195  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.039203  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.039216  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.042868  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.043570  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:30.043593  388805 pod_ready.go:81] duration metric: took 400.07099ms for pod "kube-controller-manager-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:30.043611  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:30.239724  388805 request.go:629] Waited for 196.012378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.239822  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.239834  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.239845  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.239853  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.243877  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:30.439197  388805 request.go:629] Waited for 194.264801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.439261  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.439267  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.439277  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.439284  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.442927  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:30.639073  388805 request.go:629] Waited for 94.293643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.639157  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:30.639165  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.639173  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.639180  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.643288  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:30.839410  388805 request.go:629] Waited for 195.352513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.839470  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:30.839475  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:30.839483  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:30.839487  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:30.842889  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.044687  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:31.044711  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.044720  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.044726  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.048666  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.238890  388805 request.go:629] Waited for 189.309056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.238973  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.238981  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.238992  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.239001  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.242875  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.544757  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:31.544785  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.544795  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.544799  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.548274  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:31.639566  388805 request.go:629] Waited for 90.266216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.639631  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:31.639636  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:31.639643  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:31.639647  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:31.649493  388805 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0419 20:06:32.044448  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423356-m03
	I0419 20:06:32.044474  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.044485  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.044491  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.048414  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.049199  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:32.049217  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.049225  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.049229  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.052289  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.052919  388805 pod_ready.go:92] pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.052947  388805 pod_ready.go:81] duration metric: took 2.009315041s for pod "kube-controller-manager-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.052961  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.239258  388805 request.go:629] Waited for 186.223997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:06:32.239366  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chd2r
	I0419 20:06:32.239372  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.239380  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.239388  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.243705  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:32.438843  388805 request.go:629] Waited for 194.326978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:32.438925  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:32.438930  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.438938  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.438943  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.442594  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.443228  388805 pod_ready.go:92] pod "kube-proxy-chd2r" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.443248  388805 pod_ready.go:81] duration metric: took 390.279901ms for pod "kube-proxy-chd2r" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.443259  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.639735  388805 request.go:629] Waited for 196.372073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:06:32.639802  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d56ch
	I0419 20:06:32.639807  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.639815  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.639820  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.643622  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.839386  388805 request.go:629] Waited for 194.972107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:32.839476  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:32.839485  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:32.839500  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:32.839508  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:32.842947  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:32.843684  388805 pod_ready.go:92] pod "kube-proxy-d56ch" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:32.843704  388805 pod_ready.go:81] duration metric: took 400.438188ms for pod "kube-proxy-d56ch" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:32.843713  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sr4gd" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.039392  388805 request.go:629] Waited for 195.577301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sr4gd
	I0419 20:06:33.039484  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sr4gd
	I0419 20:06:33.039491  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.039502  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.039512  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.043062  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.239750  388805 request.go:629] Waited for 195.841277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:33.239848  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:33.239859  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.239871  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.239882  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.243050  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.243948  388805 pod_ready.go:92] pod "kube-proxy-sr4gd" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:33.243971  388805 pod_ready.go:81] duration metric: took 400.251464ms for pod "kube-proxy-sr4gd" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.243984  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.439149  388805 request.go:629] Waited for 195.06327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:06:33.439223  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356
	I0419 20:06:33.439232  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.439243  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.439251  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.443391  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:33.639511  388805 request.go:629] Waited for 195.305289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:33.639579  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356
	I0419 20:06:33.639584  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.639592  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.639600  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.642892  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:33.643628  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:33.643652  388805 pod_ready.go:81] duration metric: took 399.660005ms for pod "kube-scheduler-ha-423356" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.643665  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:33.839721  388805 request.go:629] Waited for 195.952469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:06:33.839791  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m02
	I0419 20:06:33.839796  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:33.839804  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:33.839808  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:33.843854  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:34.039404  388805 request.go:629] Waited for 194.381115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:34.039466  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m02
	I0419 20:06:34.039473  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.039484  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.039499  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.043062  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.043739  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:34.043758  388805 pod_ready.go:81] duration metric: took 400.085128ms for pod "kube-scheduler-ha-423356-m02" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.043770  388805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.238799  388805 request.go:629] Waited for 194.937207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m03
	I0419 20:06:34.238869  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-423356-m03
	I0419 20:06:34.238882  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.238894  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.238904  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.242497  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.438822  388805 request.go:629] Waited for 195.323331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:34.438908  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-423356-m03
	I0419 20:06:34.438914  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.438923  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.438930  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.444594  388805 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0419 20:06:34.445423  388805 pod_ready.go:92] pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace has status "Ready":"True"
	I0419 20:06:34.445457  388805 pod_ready.go:81] duration metric: took 401.6787ms for pod "kube-scheduler-ha-423356-m03" in "kube-system" namespace to be "Ready" ...
	I0419 20:06:34.445473  388805 pod_ready.go:38] duration metric: took 12.802663415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0419 20:06:34.445496  388805 api_server.go:52] waiting for apiserver process to appear ...
	I0419 20:06:34.445573  388805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:06:34.461697  388805 api_server.go:72] duration metric: took 19.649486037s to wait for apiserver process to appear ...
	I0419 20:06:34.461723  388805 api_server.go:88] waiting for apiserver healthz status ...
	I0419 20:06:34.461747  388805 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0419 20:06:34.467927  388805 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0419 20:06:34.468027  388805 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0419 20:06:34.468041  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.468053  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.468060  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.468935  388805 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0419 20:06:34.469021  388805 api_server.go:141] control plane version: v1.30.0
	I0419 20:06:34.469040  388805 api_server.go:131] duration metric: took 7.309501ms to wait for apiserver health ...
	I0419 20:06:34.469053  388805 system_pods.go:43] waiting for kube-system pods to appear ...
	I0419 20:06:34.639464  388805 request.go:629] Waited for 170.339784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:34.639548  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:34.639554  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.639562  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.639570  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.646376  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:34.652584  388805 system_pods.go:59] 24 kube-system pods found
	I0419 20:06:34.652613  388805 system_pods.go:61] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:06:34.652619  388805 system_pods.go:61] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:06:34.652623  388805 system_pods.go:61] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:06:34.652627  388805 system_pods.go:61] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:06:34.652642  388805 system_pods.go:61] "etcd-ha-423356-m03" [71cf5f8a-a1d9-4b63-9ea9-6613f414aef2] Running
	I0419 20:06:34.652648  388805 system_pods.go:61] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:06:34.652653  388805 system_pods.go:61] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:06:34.652658  388805 system_pods.go:61] "kindnet-fkd5h" [51c38fb9-3969-4d58-9d80-a80e783a27de] Running
	I0419 20:06:34.652663  388805 system_pods.go:61] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:06:34.652669  388805 system_pods.go:61] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:06:34.652674  388805 system_pods.go:61] "kube-apiserver-ha-423356-m03" [97f56f0f-596b-4afb-a960-c2cb16cc57da] Running
	I0419 20:06:34.652677  388805 system_pods.go:61] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:06:34.652685  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:06:34.652688  388805 system_pods.go:61] "kube-controller-manager-ha-423356-m03" [b47707f2-70d7-4e46-84ff-3c16267a050c] Running
	I0419 20:06:34.652691  388805 system_pods.go:61] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:06:34.652694  388805 system_pods.go:61] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:06:34.652700  388805 system_pods.go:61] "kube-proxy-sr4gd" [5d9df920-7b11-4ba5-8811-1aacbc7aa08b] Running
	I0419 20:06:34.652702  388805 system_pods.go:61] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:06:34.652705  388805 system_pods.go:61] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:06:34.652708  388805 system_pods.go:61] "kube-scheduler-ha-423356-m03" [adce0845-d4c7-4a4f-ae6b-013b3fa69963] Running
	I0419 20:06:34.652711  388805 system_pods.go:61] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:06:34.652715  388805 system_pods.go:61] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:06:34.652720  388805 system_pods.go:61] "kube-vip-ha-423356-m03" [742e23a9-c944-4710-a12f-f76f1ea533e9] Running
	I0419 20:06:34.652722  388805 system_pods.go:61] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:06:34.652730  388805 system_pods.go:74] duration metric: took 183.666504ms to wait for pod list to return data ...
	I0419 20:06:34.652741  388805 default_sa.go:34] waiting for default service account to be created ...
	I0419 20:06:34.839213  388805 request.go:629] Waited for 186.394288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:06:34.839326  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0419 20:06:34.839342  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:34.839351  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:34.839357  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:34.843181  388805 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0419 20:06:34.843319  388805 default_sa.go:45] found service account: "default"
	I0419 20:06:34.843338  388805 default_sa.go:55] duration metric: took 190.589653ms for default service account to be created ...
	I0419 20:06:34.843354  388805 system_pods.go:116] waiting for k8s-apps to be running ...
	I0419 20:06:35.039124  388805 request.go:629] Waited for 195.686214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:35.039191  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0419 20:06:35.039197  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:35.039206  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:35.039211  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:35.045702  388805 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0419 20:06:35.052867  388805 system_pods.go:86] 24 kube-system pods found
	I0419 20:06:35.052895  388805 system_pods.go:89] "coredns-7db6d8ff4d-9td9f" [ea98cb5e-6a87-4ed0-8a55-26b77c219151] Running
	I0419 20:06:35.052901  388805 system_pods.go:89] "coredns-7db6d8ff4d-rr7zk" [7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5] Running
	I0419 20:06:35.052905  388805 system_pods.go:89] "etcd-ha-423356" [cefc3a8f-b213-49e8-a7d8-81490dec505e] Running
	I0419 20:06:35.052909  388805 system_pods.go:89] "etcd-ha-423356-m02" [6d192926-c819-45aa-8358-d49096e8a053] Running
	I0419 20:06:35.052913  388805 system_pods.go:89] "etcd-ha-423356-m03" [71cf5f8a-a1d9-4b63-9ea9-6613f414aef2] Running
	I0419 20:06:35.052917  388805 system_pods.go:89] "kindnet-7ktc2" [4d3c878f-857f-4101-ae13-f359b6de5c9e] Running
	I0419 20:06:35.052921  388805 system_pods.go:89] "kindnet-bqwfr" [1c28a900-318f-4bdc-ba7b-6cf349955c64] Running
	I0419 20:06:35.052925  388805 system_pods.go:89] "kindnet-fkd5h" [51c38fb9-3969-4d58-9d80-a80e783a27de] Running
	I0419 20:06:35.052929  388805 system_pods.go:89] "kube-apiserver-ha-423356" [513c9e06-0aa0-40f1-8c43-9b816a01f645] Running
	I0419 20:06:35.052935  388805 system_pods.go:89] "kube-apiserver-ha-423356-m02" [316ecffd-ce6c-42d0-91f2-68499ee4f7f8] Running
	I0419 20:06:35.052939  388805 system_pods.go:89] "kube-apiserver-ha-423356-m03" [97f56f0f-596b-4afb-a960-c2cb16cc57da] Running
	I0419 20:06:35.052943  388805 system_pods.go:89] "kube-controller-manager-ha-423356" [35247a4d-c96a-411e-8da8-10659b7fbfde] Running
	I0419 20:06:35.052951  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m02" [046f469e-c072-4509-8f2b-413893fffdfe] Running
	I0419 20:06:35.052955  388805 system_pods.go:89] "kube-controller-manager-ha-423356-m03" [b47707f2-70d7-4e46-84ff-3c16267a050c] Running
	I0419 20:06:35.052967  388805 system_pods.go:89] "kube-proxy-chd2r" [316420ae-b773-4dd6-b49c-d8a9d6d34752] Running
	I0419 20:06:35.052971  388805 system_pods.go:89] "kube-proxy-d56ch" [5dd81a34-6d1b-4713-bd44-7a3489b33cb3] Running
	I0419 20:06:35.052974  388805 system_pods.go:89] "kube-proxy-sr4gd" [5d9df920-7b11-4ba5-8811-1aacbc7aa08b] Running
	I0419 20:06:35.052981  388805 system_pods.go:89] "kube-scheduler-ha-423356" [800cdb1f-2fd2-4855-8354-799039225749] Running
	I0419 20:06:35.052986  388805 system_pods.go:89] "kube-scheduler-ha-423356-m02" [229bf35c-6420-498f-b616-277de36de6ef] Running
	I0419 20:06:35.052993  388805 system_pods.go:89] "kube-scheduler-ha-423356-m03" [adce0845-d4c7-4a4f-ae6b-013b3fa69963] Running
	I0419 20:06:35.052996  388805 system_pods.go:89] "kube-vip-ha-423356" [4385b850-a4b2-4f21-acf1-3d720198e1c2] Running
	I0419 20:06:35.053000  388805 system_pods.go:89] "kube-vip-ha-423356-m02" [f01cea8f-66d7-4967-b24f-21e2b9e15146] Running
	I0419 20:06:35.053006  388805 system_pods.go:89] "kube-vip-ha-423356-m03" [742e23a9-c944-4710-a12f-f76f1ea533e9] Running
	I0419 20:06:35.053009  388805 system_pods.go:89] "storage-provisioner" [956e5c6c-de0e-4f78-9151-d456dc732bdd] Running
	I0419 20:06:35.053016  388805 system_pods.go:126] duration metric: took 209.65671ms to wait for k8s-apps to be running ...
	I0419 20:06:35.053025  388805 system_svc.go:44] waiting for kubelet service to be running ....
	I0419 20:06:35.053072  388805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:06:35.068894  388805 system_svc.go:56] duration metric: took 15.857445ms WaitForService to wait for kubelet
	I0419 20:06:35.068923  388805 kubeadm.go:576] duration metric: took 20.256716597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:06:35.068945  388805 node_conditions.go:102] verifying NodePressure condition ...
	I0419 20:06:35.239218  388805 request.go:629] Waited for 170.169877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0419 20:06:35.239276  388805 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0419 20:06:35.239281  388805 round_trippers.go:469] Request Headers:
	I0419 20:06:35.239289  388805 round_trippers.go:473]     Accept: application/json, */*
	I0419 20:06:35.239294  388805 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0419 20:06:35.243544  388805 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0419 20:06:35.244709  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244732  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244750  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244754  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244758  388805 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0419 20:06:35.244761  388805 node_conditions.go:123] node cpu capacity is 2
	I0419 20:06:35.244765  388805 node_conditions.go:105] duration metric: took 175.814541ms to run NodePressure ...
	I0419 20:06:35.244777  388805 start.go:240] waiting for startup goroutines ...
	I0419 20:06:35.244801  388805 start.go:254] writing updated cluster config ...
	I0419 20:06:35.245141  388805 ssh_runner.go:195] Run: rm -f paused
	I0419 20:06:35.298665  388805 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0419 20:06:35.300894  388805 out.go:177] * Done! kubectl is now configured to use "ha-423356" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.766119173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557472766044170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd6f9e17-9f10-4c6c-8f82-2197ac5ccd3b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.766839931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b416c6b-d3f1-45e4-b6bc-e1c650178cd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.766900290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b416c6b-d3f1-45e4-b6bc-e1c650178cd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.767234419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b416c6b-d3f1-45e4-b6bc-e1c650178cd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.809205239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=418b8f38-91c7-4a9e-a61e-b142575e139c name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.809340040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=418b8f38-91c7-4a9e-a61e-b142575e139c name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.811327299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70cd890f-e966-406c-8455-d0a13e2609d5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.814564199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557472814526575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70cd890f-e966-406c-8455-d0a13e2609d5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.815399709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=889e9c0a-5a22-47b0-ad77-f81d6f7e1785 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.815509909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=889e9c0a-5a22-47b0-ad77-f81d6f7e1785 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.815883899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=889e9c0a-5a22-47b0-ad77-f81d6f7e1785 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.865167856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e859cbd-745e-40ae-9923-b9412747a4bc name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.865274442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e859cbd-745e-40ae-9923-b9412747a4bc name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.866721004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64c12673-bdbb-4e96-8e9e-e25a26ba77e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.867643619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557472867618746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64c12673-bdbb-4e96-8e9e-e25a26ba77e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.869153783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=256cdda6-bc31-4b9a-b3cb-7922e1163b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.869228082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=256cdda6-bc31-4b9a-b3cb-7922e1163b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.869525922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=256cdda6-bc31-4b9a-b3cb-7922e1163b4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.918536557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4db26c02-0625-4eba-9350-40b5eed8d0b2 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.918633702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4db26c02-0625-4eba-9350-40b5eed8d0b2 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.920093642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d031d128-98dc-4c26-857b-b8e7bac8db33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.920616898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557472920594597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d031d128-98dc-4c26-857b-b8e7bac8db33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.921487331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9c825c3-bcbd-4880-a0f3-59286a7d19dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.921562254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9c825c3-bcbd-4880-a0f3-59286a7d19dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:11:12 ha-423356 crio[682]: time="2024-04-19 20:11:12.921836239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557199513600592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040508330751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7e6d8be93a847174a2a6b4accd8be1a47b774b0e42858e0c714d6c91f06715,PodSandboxId:8f591c7ca632f6bad17108b2ab1619ebde69347203bba0b0d9f05d430941c870,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557040411265189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557040394742825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-af
d8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f,PodSandboxId:96603b0da41287ce1a900056e0666516a825b53e64896a04df176229d1e50f6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17135570
38604913924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557038567301292,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56f50a4f747592a41c44054a8663d4e9ad20d2157caa39b25cf1603cb93ec7a5,PodSandboxId:cbc67ae14f71d52f0d48f935b0903879e4afc380e2045d83d1ed54f1a1a34efc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557021327367504,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a320c16f8db03f2789d3dd12ee4abe3e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad,PodSandboxId:80c3450b238ada185b190d7ecc976dd7f972a17a1587d6fdca889d804c2ecda4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557018534976653,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557018532520170,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6,PodSandboxId:ebb98898864fe63bd725e9b7521f11047f502f1fe523217483bf9e25b7ba7fbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557018508350279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557018403376645,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9c825c3-bcbd-4880-a0f3-59286a7d19dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b80b69bd108f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   027b57294cfbd       busybox-fc5497c4f-wqfc4
	dcfa7c435542c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   14c798e2b76b0       coredns-7db6d8ff4d-9td9f
	3b7e6d8be93a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   8f591c7ca632f       storage-provisioner
	2382f52abc364       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   8a34b24c4a7dd       coredns-7db6d8ff4d-rr7zk
	5b9312aae8712       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   96603b0da4128       kindnet-bqwfr
	b5377046480e9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   a9af78af7cd87       kube-proxy-chd2r
	56f50a4f74759       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   cbc67ae14f71d       kube-vip-ha-423356
	e7d5dc9bb5064       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   80c3450b238ad       kube-controller-manager-ha-423356
	7f1baf88d5884       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   68e93a81da913       kube-scheduler-ha-423356
	6765b5ae2f794       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   ebb98898864fe       kube-apiserver-ha-423356
	1572778d3f528       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   9ba5078b4acef       etcd-ha-423356
	
	
	==> coredns [2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24] <==
	[INFO] 10.244.2.2:33276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001354s
	[INFO] 10.244.2.2:40300 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253575s
	[INFO] 10.244.2.2:56973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142218s
	[INFO] 10.244.2.2:35913 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176895s
	[INFO] 10.244.1.2:40511 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132923s
	[INFO] 10.244.1.2:34902 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201571s
	[INFO] 10.244.1.2:53225 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001465991s
	[INFO] 10.244.1.2:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258304s
	[INFO] 10.244.1.2:59316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128123s
	[INFO] 10.244.1.2:48977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110722s
	[INFO] 10.244.0.4:40375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793494s
	[INFO] 10.244.0.4:60622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049591s
	[INFO] 10.244.0.4:34038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003778s
	[INFO] 10.244.0.4:51412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043214s
	[INFO] 10.244.0.4:56955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042946s
	[INFO] 10.244.2.2:46864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134976s
	[INFO] 10.244.2.2:34230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011483s
	[INFO] 10.244.1.2:38189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097409s
	[INFO] 10.244.1.2:33041 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080538s
	[INFO] 10.244.0.4:37791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018566s
	[INFO] 10.244.0.4:46485 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061131s
	[INFO] 10.244.0.4:50872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086293s
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142168s
	[INFO] 10.244.1.2:55061 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177752s
	[INFO] 10.244.0.4:44369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008812s
	
	
	==> coredns [dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5] <==
	[INFO] 10.244.0.4:49749 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472384s
	[INFO] 10.244.0.4:55334 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002090348s
	[INFO] 10.244.2.2:56357 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003829125s
	[INFO] 10.244.2.2:35752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290736s
	[INFO] 10.244.2.2:48589 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003091553s
	[INFO] 10.244.2.2:49259 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138019s
	[INFO] 10.244.1.2:50375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000377277s
	[INFO] 10.244.1.2:43502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916758s
	[INFO] 10.244.0.4:50440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109012s
	[INFO] 10.244.0.4:50457 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001351323s
	[INFO] 10.244.0.4:57273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119319s
	[INFO] 10.244.2.2:49275 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210181s
	[INFO] 10.244.2.2:41514 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192084s
	[INFO] 10.244.1.2:56219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000465859s
	[INFO] 10.244.1.2:60572 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114905s
	[INFO] 10.244.0.4:52874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098566s
	[INFO] 10.244.2.2:47734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249839s
	[INFO] 10.244.2.2:50981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179648s
	[INFO] 10.244.2.2:34738 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109005s
	[INFO] 10.244.1.2:37966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181053s
	[INFO] 10.244.1.2:48636 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116821s
	[INFO] 10.244.1.2:52580 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000260337s
	[INFO] 10.244.0.4:43327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088111s
	[INFO] 10.244.0.4:47823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105899s
	[INFO] 10.244.0.4:41223 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050192s
	
	
	==> describe nodes <==
	Name:               ha-423356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:11:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:06:48 +0000   Fri, 19 Apr 2024 20:03:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-423356
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133e52820e114c7aa16933b82eb1ac6a
	  System UUID:                133e5282-0e11-4c7a-a169-33b82eb1ac6a
	  Boot ID:                    752cc004-2412-44ee-9782-2d20c1c3993d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqfc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-7db6d8ff4d-9td9f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 coredns-7db6d8ff4d-rr7zk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m16s
	  kube-system                 etcd-ha-423356                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m30s
	  kube-system                 kindnet-bqwfr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m16s
	  kube-system                 kube-apiserver-ha-423356             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-ha-423356    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-chd2r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-scheduler-ha-423356             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-vip-ha-423356                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m14s  kube-proxy       
	  Normal  Starting                 7m29s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s  kubelet          Node ha-423356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s  kubelet          Node ha-423356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s  kubelet          Node ha-423356 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m17s  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal  NodeReady                7m14s  kubelet          Node ha-423356 status is now: NodeReady
	  Normal  RegisteredNode           6m4s   node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal  RegisteredNode           4m44s  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	
	
	Name:               ha-423356-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:04:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:07:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Apr 2024 20:06:53 +0000   Fri, 19 Apr 2024 20:08:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-423356-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 346b871eba5f43789a16ce3dbbb4ec2c
	  System UUID:                346b871e-ba5f-4378-9a16-ce3dbbb4ec2c
	  Boot ID:                    c563aa8d-17e5-4d9b-a5f2-9aac493d81ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fq5c2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-423356-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-7ktc2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-423356-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-423356-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-d56ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-423356-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-423356-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-423356-m02 status is now: NodeNotReady
	
	
	Name:               ha-423356-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_06_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:06:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:06:42 +0000   Fri, 19 Apr 2024 20:06:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-423356-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98c76a7ef5ce4a80bed88d9102770ac6
	  System UUID:                98c76a7e-f5ce-4a80-bed8-8d9102770ac6
	  Boot ID:                    a8bf7a9b-27ec-43ce-9057-8997d2be8da7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4t8f9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-423356-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-fkd5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-423356-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-423356-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-sr4gd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-423356-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-423356-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m57s                kube-proxy       
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-423356-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	
	
	Name:               ha-423356-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:07:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:11:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:07:46 +0000   Fri, 19 Apr 2024 20:07:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-423356-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f1d7a6307945baa5aa5c71ec020b88
	  System UUID:                22f1d7a6-3079-45ba-a5aa-5c71ec020b88
	  Boot ID:                    d9c8dea3-edf9-4bd2-bec6-870cc3e73878
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wj85m       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-proxy-7x69m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x3 over 3m59s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x3 over 3m59s)  kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x3 over 3m59s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal  NodeReady                3m48s                  kubelet          Node ha-423356-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr19 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051835] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040743] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.581158] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.878121] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.106083] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.064815] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057768] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.176439] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.158804] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285884] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.425458] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.068806] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.328517] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.914724] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.592182] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.083040] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.869346] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:04] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d] <==
	{"level":"warn","ts":"2024-04-19T20:11:13.189796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.197237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.20234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.212572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.218939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.225426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.226773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.231776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.232753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.235988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.236724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.243316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.248256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.253538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.257752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.26303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.272268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.278538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.284678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.289672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.293014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.30038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.305538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.312349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-19T20:11:13.337394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:11:13 up 8 min,  0 users,  load average: 0.22, 0.17, 0.09
	Linux ha-423356 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b9312aae871204d98da87209a637760b82b8af0a35f57c4d5d62a76976d3a1f] <==
	I0419 20:10:40.139298       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:10:50.152733       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:10:50.152993       1 main.go:227] handling current node
	I0419 20:10:50.153043       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:10:50.153200       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:10:50.153345       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:10:50.153368       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:10:50.153438       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:10:50.153461       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:11:00.170964       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:11:00.171036       1 main.go:227] handling current node
	I0419 20:11:00.171388       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:11:00.171437       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:11:00.171580       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:11:00.171622       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:11:00.171713       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:11:00.171758       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:11:10.184174       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:11:10.184218       1 main.go:227] handling current node
	I0419 20:11:10.184230       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:11:10.184236       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:11:10.184426       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:11:10.184459       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:11:10.184509       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:11:10.184536       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6765b5ae2f7949557fe5e44ef86ccaea47af1f1ffd35b88efa3766eba66780e6] <==
	I0419 20:03:44.731656       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:03:44.751469       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0419 20:03:44.766346       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:03:57.422370       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0419 20:03:57.823882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0419 20:04:51.720594       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0419 20:04:51.720661       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0419 20:04:51.720608       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 7.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0419 20:04:51.721807       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0419 20:04:51.721970       1 timeout.go:142] post-timeout activity - time-elapsed: 1.482905ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0419 20:06:41.074036       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55302: use of closed network connection
	E0419 20:06:41.302880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55322: use of closed network connection
	E0419 20:06:41.528513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55346: use of closed network connection
	E0419 20:06:41.980495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55378: use of closed network connection
	E0419 20:06:42.178624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55394: use of closed network connection
	E0419 20:06:42.395012       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55400: use of closed network connection
	E0419 20:06:42.603522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55414: use of closed network connection
	E0419 20:06:42.813573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55432: use of closed network connection
	E0419 20:06:43.133878       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55456: use of closed network connection
	E0419 20:06:43.362266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55468: use of closed network connection
	E0419 20:06:43.584573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55482: use of closed network connection
	E0419 20:06:43.819350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55502: use of closed network connection
	E0419 20:06:44.022444       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55520: use of closed network connection
	E0419 20:06:44.214707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55542: use of closed network connection
	W0419 20:07:53.556297       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.7]
	
	
	==> kube-controller-manager [e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad] <==
	I0419 20:04:50.893653       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:04:51.810679       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m02"
	I0419 20:06:11.318644       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423356-m03\" does not exist"
	I0419 20:06:11.337213       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:06:11.840978       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m03"
	I0419 20:06:36.312859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.726466ms"
	I0419 20:06:36.368249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.266133ms"
	I0419 20:06:36.368971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="154.345µs"
	I0419 20:06:36.535948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.442461ms"
	I0419 20:06:36.712001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.792385ms"
	E0419 20:06:36.712207       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0419 20:06:36.712826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.915µs"
	I0419 20:06:36.718154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="237.06µs"
	I0419 20:06:40.346315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.023017ms"
	I0419 20:06:40.346537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="167.442µs"
	I0419 20:06:40.534871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.413064ms"
	I0419 20:06:40.589349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.405859ms"
	I0419 20:06:40.589467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.471µs"
	I0419 20:07:15.307504       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423356-m04\" does not exist"
	I0419 20:07:15.352557       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423356-m04" podCIDRs=["10.244.3.0/24"]
	I0419 20:07:16.871281       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m04"
	I0419 20:07:25.883416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423356-m04"
	I0419 20:08:26.913893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423356-m04"
	I0419 20:08:27.014609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.053893ms"
	I0419 20:08:27.015120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.024µs"
	
	
	==> kube-proxy [b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573] <==
	I0419 20:03:58.715526       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:03:58.723910       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0419 20:03:58.792221       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:03:58.792331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:03:58.792410       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:03:58.797371       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:03:58.797629       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:03:58.797669       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:03:58.798793       1 config.go:192] "Starting service config controller"
	I0419 20:03:58.798834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:03:58.798871       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:03:58.798876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:03:58.799665       1 config.go:319] "Starting node config controller"
	I0419 20:03:58.799731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:03:58.899283       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:03:58.899412       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:03:58.899868       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861] <==
	W0419 20:03:42.868111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:03:42.868254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:03:42.939698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 20:03:42.939757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 20:03:42.982791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:03:42.982846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0419 20:03:45.558861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0419 20:06:11.516884       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gzbf4\": pod kube-proxy-gzbf4 is already assigned to node \"ha-423356-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gzbf4" node="ha-423356-m03"
	E0419 20:06:11.517815       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c5jvm\": pod kindnet-c5jvm is already assigned to node \"ha-423356-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-c5jvm" node="ha-423356-m03"
	E0419 20:06:11.518908       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 650472a1-b2bf-4cc9-97ea-12ec043e8728(kube-system/kindnet-c5jvm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c5jvm"
	E0419 20:06:11.519146       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c5jvm\": pod kindnet-c5jvm is already assigned to node \"ha-423356-m03\"" pod="kube-system/kindnet-c5jvm"
	I0419 20:06:11.519210       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c5jvm" node="ha-423356-m03"
	E0419 20:06:11.518793       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5ebded0d-82e1-4df3-9eac-43f34b7b74db(kube-system/kube-proxy-gzbf4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gzbf4"
	E0419 20:06:11.520145       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gzbf4\": pod kube-proxy-gzbf4 is already assigned to node \"ha-423356-m03\"" pod="kube-system/kube-proxy-gzbf4"
	I0419 20:06:11.520170       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gzbf4" node="ha-423356-m03"
	E0419 20:06:36.281563       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fq5c2\": pod busybox-fc5497c4f-fq5c2 is already assigned to node \"ha-423356-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-fq5c2" node="ha-423356-m02"
	E0419 20:06:36.281696       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4cc0bdd1-d446-460a-a41f-fcd5ef8aa55b(default/busybox-fc5497c4f-fq5c2) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-fq5c2"
	E0419 20:06:36.282008       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fq5c2\": pod busybox-fc5497c4f-fq5c2 is already assigned to node \"ha-423356-m02\"" pod="default/busybox-fc5497c4f-fq5c2"
	I0419 20:06:36.282183       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fq5c2" node="ha-423356-m02"
	E0419 20:07:15.381407       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wj85m\": pod kindnet-wj85m is already assigned to node \"ha-423356-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wj85m" node="ha-423356-m04"
	E0419 20:07:15.381613       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wj85m\": pod kindnet-wj85m is already assigned to node \"ha-423356-m04\"" pod="kube-system/kindnet-wj85m"
	E0419 20:07:15.395423       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7x69m\": pod kube-proxy-7x69m is already assigned to node \"ha-423356-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7x69m" node="ha-423356-m04"
	E0419 20:07:15.395516       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b5bd3478-3c20-44bd-bb1a-26c616d96c19(kube-system/kube-proxy-7x69m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7x69m"
	E0419 20:07:15.395546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7x69m\": pod kube-proxy-7x69m is already assigned to node \"ha-423356-m04\"" pod="kube-system/kube-proxy-7x69m"
	I0419 20:07:15.395576       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7x69m" node="ha-423356-m04"
	
	
	==> kubelet <==
	Apr 19 20:06:44 ha-423356 kubelet[1380]: E0419 20:06:44.674926    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:06:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:06:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:07:44 ha-423356 kubelet[1380]: E0419 20:07:44.669196    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:07:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:07:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:08:44 ha-423356 kubelet[1380]: E0419 20:08:44.674189    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:08:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:08:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:09:44 ha-423356 kubelet[1380]: E0419 20:09:44.669451    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:09:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:09:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:10:44 ha-423356 kubelet[1380]: E0419 20:10:44.668951    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:10:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:10:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:10:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:10:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-423356 -n ha-423356
helpers_test.go:261: (dbg) Run:  kubectl --context ha-423356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.08s)
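
The kubelet entries in the log above repeat the same "Could not set up iptables canary" error once a minute: kubelet probes ip6tables by creating a throwaway chain in the nat table, and on this VM the probe fails because the ip6tables nat table is unavailable ("Table does not exist (do you need to insmod?)", i.e. the ip6table_nat kernel module is likely not loaded). The following is a minimal, hedged sketch of that kind of canary probe — it is not kubelet's actual implementation, only an illustration of the check whose failure is logged; it assumes a Linux host with the ip6tables binary and root privileges.

// canary_probe.go: illustrative sketch of an ip6tables "canary" check (assumed
// structure, not kubelet's code). It tries to create a throwaway chain in the
// ip6tables nat table; on the test VM this fails with "Table does not exist".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const table, chain = "nat", "KUBE-KUBELET-CANARY"
	// Equivalent to: ip6tables -w -t nat -N KUBE-KUBELET-CANARY
	out, err := exec.Command("ip6tables", "-w", "-t", table, "-N", chain).CombinedOutput()
	if err != nil {
		// This is the situation reported in the kubelet log above.
		fmt.Printf("could not set up iptables canary: %v\n%s", err, out)
		return
	}
	// Probe succeeded; remove the throwaway chain again.
	_ = exec.Command("ip6tables", "-w", "-t", table, "-X", chain).Run()
	fmt.Println("ip6tables nat table is usable")
}
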

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-423356 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-423356 -v=7 --alsologtostderr
E0419 20:12:10.227345  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:12:37.913933  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-423356 -v=7 --alsologtostderr: exit status 82 (2m2.730683573s)

                                                
                                                
-- stdout --
	* Stopping node "ha-423356-m04"  ...
	* Stopping node "ha-423356-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:11:14.876549  394639 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:11:14.876702  394639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:14.876712  394639 out.go:304] Setting ErrFile to fd 2...
	I0419 20:11:14.876716  394639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:11:14.876891  394639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:11:14.877127  394639 out.go:298] Setting JSON to false
	I0419 20:11:14.877207  394639 mustload.go:65] Loading cluster: ha-423356
	I0419 20:11:14.877637  394639 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:11:14.877733  394639 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:11:14.877924  394639 mustload.go:65] Loading cluster: ha-423356
	I0419 20:11:14.878058  394639 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:11:14.878101  394639 stop.go:39] StopHost: ha-423356-m04
	I0419 20:11:14.878490  394639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:14.878537  394639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:14.895148  394639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0419 20:11:14.895577  394639 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:14.896120  394639 main.go:141] libmachine: Using API Version  1
	I0419 20:11:14.896153  394639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:14.896552  394639 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:14.898986  394639 out.go:177] * Stopping node "ha-423356-m04"  ...
	I0419 20:11:14.900623  394639 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0419 20:11:14.900668  394639 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:11:14.900900  394639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0419 20:11:14.900926  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:11:14.903521  394639 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:14.904020  394639 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:07:00 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:11:14.904049  394639 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:11:14.904161  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:11:14.904319  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:11:14.904461  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:11:14.904597  394639 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:11:14.991903  394639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0419 20:11:15.045362  394639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0419 20:11:15.098696  394639 main.go:141] libmachine: Stopping "ha-423356-m04"...
	I0419 20:11:15.098730  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:11:15.100485  394639 main.go:141] libmachine: (ha-423356-m04) Calling .Stop
	I0419 20:11:15.104239  394639 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 0/120
	I0419 20:11:16.105618  394639 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 1/120
	I0419 20:11:17.107512  394639 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:11:17.108700  394639 main.go:141] libmachine: Machine "ha-423356-m04" was stopped.
	I0419 20:11:17.108718  394639 stop.go:75] duration metric: took 2.208100525s to stop
	I0419 20:11:17.108771  394639 stop.go:39] StopHost: ha-423356-m03
	I0419 20:11:17.109113  394639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:11:17.109157  394639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:11:17.123879  394639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0419 20:11:17.124383  394639 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:11:17.125060  394639 main.go:141] libmachine: Using API Version  1
	I0419 20:11:17.125084  394639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:11:17.125486  394639 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:11:17.127579  394639 out.go:177] * Stopping node "ha-423356-m03"  ...
	I0419 20:11:17.128971  394639 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0419 20:11:17.129005  394639 main.go:141] libmachine: (ha-423356-m03) Calling .DriverName
	I0419 20:11:17.129247  394639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0419 20:11:17.129272  394639 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHHostname
	I0419 20:11:17.131862  394639 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:17.132372  394639 main.go:141] libmachine: (ha-423356-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:cf:fe", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:05:28 +0000 UTC Type:0 Mac:52:54:00:fc:cf:fe Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-423356-m03 Clientid:01:52:54:00:fc:cf:fe}
	I0419 20:11:17.132410  394639 main.go:141] libmachine: (ha-423356-m03) DBG | domain ha-423356-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:fc:cf:fe in network mk-ha-423356
	I0419 20:11:17.132453  394639 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHPort
	I0419 20:11:17.132623  394639 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHKeyPath
	I0419 20:11:17.132786  394639 main.go:141] libmachine: (ha-423356-m03) Calling .GetSSHUsername
	I0419 20:11:17.132937  394639 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m03/id_rsa Username:docker}
	I0419 20:11:17.220744  394639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0419 20:11:17.273655  394639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0419 20:11:17.335056  394639 main.go:141] libmachine: Stopping "ha-423356-m03"...
	I0419 20:11:17.335091  394639 main.go:141] libmachine: (ha-423356-m03) Calling .GetState
	I0419 20:11:17.336798  394639 main.go:141] libmachine: (ha-423356-m03) Calling .Stop
	I0419 20:11:17.340435  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 0/120
	I0419 20:11:18.342112  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 1/120
	I0419 20:11:19.343858  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 2/120
	I0419 20:11:20.345227  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 3/120
	I0419 20:11:21.346828  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 4/120
	I0419 20:11:22.348808  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 5/120
	I0419 20:11:23.351207  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 6/120
	I0419 20:11:24.353240  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 7/120
	I0419 20:11:25.354681  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 8/120
	I0419 20:11:26.356226  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 9/120
	I0419 20:11:27.358647  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 10/120
	I0419 20:11:28.360379  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 11/120
	I0419 20:11:29.362116  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 12/120
	I0419 20:11:30.363538  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 13/120
	I0419 20:11:31.365160  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 14/120
	I0419 20:11:32.367303  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 15/120
	I0419 20:11:33.369061  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 16/120
	I0419 20:11:34.370686  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 17/120
	I0419 20:11:35.372211  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 18/120
	I0419 20:11:36.373838  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 19/120
	I0419 20:11:37.375870  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 20/120
	I0419 20:11:38.377604  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 21/120
	I0419 20:11:39.379239  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 22/120
	I0419 20:11:40.380800  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 23/120
	I0419 20:11:41.382540  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 24/120
	I0419 20:11:42.384842  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 25/120
	I0419 20:11:43.386500  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 26/120
	I0419 20:11:44.388007  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 27/120
	I0419 20:11:45.389695  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 28/120
	I0419 20:11:46.391655  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 29/120
	I0419 20:11:47.394305  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 30/120
	I0419 20:11:48.395973  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 31/120
	I0419 20:11:49.397660  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 32/120
	I0419 20:11:50.399180  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 33/120
	I0419 20:11:51.401054  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 34/120
	I0419 20:11:52.402820  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 35/120
	I0419 20:11:53.404126  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 36/120
	I0419 20:11:54.405463  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 37/120
	I0419 20:11:55.407094  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 38/120
	I0419 20:11:56.408387  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 39/120
	I0419 20:11:57.410541  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 40/120
	I0419 20:11:58.411709  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 41/120
	I0419 20:11:59.413036  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 42/120
	I0419 20:12:00.414431  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 43/120
	I0419 20:12:01.415742  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 44/120
	I0419 20:12:02.417723  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 45/120
	I0419 20:12:03.419222  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 46/120
	I0419 20:12:04.420563  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 47/120
	I0419 20:12:05.422921  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 48/120
	I0419 20:12:06.424328  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 49/120
	I0419 20:12:07.426134  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 50/120
	I0419 20:12:08.427633  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 51/120
	I0419 20:12:09.429069  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 52/120
	I0419 20:12:10.430516  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 53/120
	I0419 20:12:11.431924  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 54/120
	I0419 20:12:12.433874  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 55/120
	I0419 20:12:13.435275  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 56/120
	I0419 20:12:14.436906  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 57/120
	I0419 20:12:15.438505  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 58/120
	I0419 20:12:16.440076  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 59/120
	I0419 20:12:17.442132  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 60/120
	I0419 20:12:18.443442  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 61/120
	I0419 20:12:19.444880  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 62/120
	I0419 20:12:20.446218  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 63/120
	I0419 20:12:21.447533  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 64/120
	I0419 20:12:22.449372  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 65/120
	I0419 20:12:23.450818  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 66/120
	I0419 20:12:24.452157  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 67/120
	I0419 20:12:25.453459  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 68/120
	I0419 20:12:26.454770  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 69/120
	I0419 20:12:27.456705  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 70/120
	I0419 20:12:28.458022  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 71/120
	I0419 20:12:29.459484  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 72/120
	I0419 20:12:30.460803  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 73/120
	I0419 20:12:31.463206  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 74/120
	I0419 20:12:32.465385  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 75/120
	I0419 20:12:33.467084  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 76/120
	I0419 20:12:34.468441  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 77/120
	I0419 20:12:35.469997  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 78/120
	I0419 20:12:36.471472  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 79/120
	I0419 20:12:37.473233  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 80/120
	I0419 20:12:38.475323  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 81/120
	I0419 20:12:39.477331  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 82/120
	I0419 20:12:40.479644  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 83/120
	I0419 20:12:41.481092  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 84/120
	I0419 20:12:42.482505  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 85/120
	I0419 20:12:43.484010  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 86/120
	I0419 20:12:44.486072  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 87/120
	I0419 20:12:45.487355  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 88/120
	I0419 20:12:46.488806  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 89/120
	I0419 20:12:47.490673  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 90/120
	I0419 20:12:48.492142  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 91/120
	I0419 20:12:49.493544  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 92/120
	I0419 20:12:50.494983  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 93/120
	I0419 20:12:51.497182  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 94/120
	I0419 20:12:52.498575  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 95/120
	I0419 20:12:53.500039  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 96/120
	I0419 20:12:54.501404  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 97/120
	I0419 20:12:55.502852  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 98/120
	I0419 20:12:56.504393  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 99/120
	I0419 20:12:57.506323  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 100/120
	I0419 20:12:58.508193  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 101/120
	I0419 20:12:59.509620  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 102/120
	I0419 20:13:00.511166  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 103/120
	I0419 20:13:01.512582  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 104/120
	I0419 20:13:02.514686  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 105/120
	I0419 20:13:03.516446  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 106/120
	I0419 20:13:04.518003  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 107/120
	I0419 20:13:05.519529  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 108/120
	I0419 20:13:06.520980  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 109/120
	I0419 20:13:07.523312  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 110/120
	I0419 20:13:08.525072  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 111/120
	I0419 20:13:09.527349  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 112/120
	I0419 20:13:10.528683  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 113/120
	I0419 20:13:11.530334  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 114/120
	I0419 20:13:12.532363  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 115/120
	I0419 20:13:13.534076  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 116/120
	I0419 20:13:14.535414  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 117/120
	I0419 20:13:15.536976  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 118/120
	I0419 20:13:16.538483  394639 main.go:141] libmachine: (ha-423356-m03) Waiting for machine to stop 119/120
	I0419 20:13:17.540017  394639 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0419 20:13:17.540071  394639 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0419 20:13:17.542329  394639 out.go:177] 
	W0419 20:13:17.543897  394639 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0419 20:13:17.543913  394639 out.go:239] * 
	* 
	W0419 20:13:17.547282  394639 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 20:13:17.549020  394639 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-423356 -v=7 --alsologtostderr" : exit status 82
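
The stderr block above shows why the stop exits with status 82: after asking the m03 VM to power off, the driver polls its state once per second and gives up after 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), at which point the still-"Running" state is surfaced as GUEST_STOP_TIMEOUT. The sketch below is an illustrative model of that poll-until-timeout loop under assumed names — it is not minikube's actual stop code, just a compact restatement of the behaviour visible in the log.

// stop_poll.go: illustrative sketch (assumed names, not minikube's code) of a
// stop loop that polls VM state once per second for 120 attempts and reports a
// timeout if the machine never leaves the "Running" state.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for the libmachine state query; here it pretends the VM
// never stops, mirroring the failure in the log above.
func vmState() string { return "Running" }

func stopVM(attempts int) error {
	for i := 0; i < attempts; i++ {
		if vmState() != "Running" {
			return nil // machine reported stopped
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second) // one-second poll interval, ~2 minutes total
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopVM(120); err != nil {
		fmt.Println("stop err:", err) // analogous to the GUEST_STOP_TIMEOUT exit above
	}
}
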
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-423356 --wait=true -v=7 --alsologtostderr
E0419 20:17:10.227815  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-423356 --wait=true -v=7 --alsologtostderr: (4m12.564367417s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-423356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-423356 -n ha-423356
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-423356 logs -n 25: (1.91869657s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m04 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp testdata/cp-test.txt                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m04_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03:/home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m03 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-423356 node stop m02 -v=7                                                     | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-423356 node start m02 -v=7                                                    | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-423356 -v=7                                                           | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-423356 -v=7                                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-423356 --wait=true -v=7                                                    | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:13 UTC | 19 Apr 24 20:17 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-423356                                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:17 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:13:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:13:17.613989  395150 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:13:17.614253  395150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:13:17.614278  395150 out.go:304] Setting ErrFile to fd 2...
	I0419 20:13:17.614282  395150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:13:17.614467  395150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:13:17.615060  395150 out.go:298] Setting JSON to false
	I0419 20:13:17.616067  395150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6944,"bootTime":1713550654,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:13:17.616138  395150 start.go:139] virtualization: kvm guest
	I0419 20:13:17.618808  395150 out.go:177] * [ha-423356] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:13:17.620839  395150 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:13:17.620818  395150 notify.go:220] Checking for updates...
	I0419 20:13:17.622229  395150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:13:17.623897  395150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:13:17.625359  395150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:13:17.626872  395150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:13:17.628398  395150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:13:17.630607  395150 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:13:17.630763  395150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:13:17.631472  395150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:13:17.631522  395150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:13:17.647414  395150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0419 20:13:17.647804  395150 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:13:17.648406  395150 main.go:141] libmachine: Using API Version  1
	I0419 20:13:17.648433  395150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:13:17.648792  395150 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:13:17.648974  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.687472  395150 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:13:17.688852  395150 start.go:297] selected driver: kvm2
	I0419 20:13:17.688865  395150 start.go:901] validating driver "kvm2" against &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:13:17.689016  395150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:13:17.689341  395150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:13:17.689422  395150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:13:17.704874  395150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:13:17.705625  395150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:13:17.705682  395150 cni.go:84] Creating CNI manager for ""
	I0419 20:13:17.705694  395150 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0419 20:13:17.705759  395150 start.go:340] cluster config:
	{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:13:17.705887  395150 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:13:17.707833  395150 out.go:177] * Starting "ha-423356" primary control-plane node in "ha-423356" cluster
	I0419 20:13:17.709237  395150 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:13:17.709271  395150 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:13:17.709282  395150 cache.go:56] Caching tarball of preloaded images
	I0419 20:13:17.709401  395150 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:13:17.709414  395150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:13:17.709535  395150 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:13:17.709724  395150 start.go:360] acquireMachinesLock for ha-423356: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:13:17.709769  395150 start.go:364] duration metric: took 25.519µs to acquireMachinesLock for "ha-423356"
	I0419 20:13:17.709805  395150 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:13:17.709813  395150 fix.go:54] fixHost starting: 
	I0419 20:13:17.710073  395150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:13:17.710101  395150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:13:17.725270  395150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0419 20:13:17.725775  395150 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:13:17.726349  395150 main.go:141] libmachine: Using API Version  1
	I0419 20:13:17.726374  395150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:13:17.726692  395150 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:13:17.726928  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.727076  395150 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:13:17.728870  395150 fix.go:112] recreateIfNeeded on ha-423356: state=Running err=<nil>
	W0419 20:13:17.728903  395150 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:13:17.730906  395150 out.go:177] * Updating the running kvm2 "ha-423356" VM ...
	I0419 20:13:17.731983  395150 machine.go:94] provisionDockerMachine start ...
	I0419 20:13:17.732000  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.732198  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.734753  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.735162  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.735181  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.735396  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.735630  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.735877  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.736052  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.736283  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.736494  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.736513  395150 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:13:17.846439  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:13:17.846483  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:17.846793  395150 buildroot.go:166] provisioning hostname "ha-423356"
	I0419 20:13:17.846825  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:17.847027  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.850089  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.850538  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.850568  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.850725  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.850918  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.851115  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.851287  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.851501  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.851679  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.851692  395150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356 && echo "ha-423356" | sudo tee /etc/hostname
	I0419 20:13:17.971434  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:13:17.971469  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.974335  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.974720  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.974751  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.974903  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.975101  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.975268  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.975386  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.975594  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.975763  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.975778  395150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:13:18.077962  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:13:18.077998  395150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:13:18.078040  395150 buildroot.go:174] setting up certificates
	I0419 20:13:18.078055  395150 provision.go:84] configureAuth start
	I0419 20:13:18.078070  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:18.078380  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:13:18.081559  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.081975  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.082015  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.082129  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.084451  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.084779  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.084799  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.084990  395150 provision.go:143] copyHostCerts
	I0419 20:13:18.085034  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:13:18.085073  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:13:18.085082  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:13:18.085148  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:13:18.085234  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:13:18.085251  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:13:18.085258  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:13:18.085280  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:13:18.085339  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:13:18.085361  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:13:18.085368  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:13:18.085388  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:13:18.085493  395150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356 san=[127.0.0.1 192.168.39.7 ha-423356 localhost minikube]
	I0419 20:13:18.273047  395150 provision.go:177] copyRemoteCerts
	I0419 20:13:18.273109  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:13:18.273136  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.275922  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.276222  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.276250  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.276434  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:18.276629  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.276795  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:18.276910  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:13:18.361571  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:13:18.361677  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:13:18.390997  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:13:18.391108  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0419 20:13:18.417953  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:13:18.418063  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:13:18.444341  395150 provision.go:87] duration metric: took 366.268199ms to configureAuth
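Note: the configureAuth step above copies the host CA material into the profile, generates a server certificate for this node (SANs 127.0.0.1, 192.168.39.7, ha-423356, localhost, minikube), and pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. An illustrative check, not part of the captured run, would be to confirm on the guest that the pushed server certificate chains to the minikube CA:

    # illustrative only; the paths are the ones shown in the log above
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem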
	I0419 20:13:18.444383  395150 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:13:18.444604  395150 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:13:18.444720  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.447494  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.448012  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.448050  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.448196  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:18.448416  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.448620  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.448805  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:18.448997  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:18.449166  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:18.449190  395150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:14:49.395851  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:14:49.395886  395150 machine.go:97] duration metric: took 1m31.663890211s to provisionDockerMachine
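Note: the SSH command issued at 20:13:18 writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' (the Kubernetes service CIDR) and then restarts CRI-O; the ~91-second gap before its result at 20:14:49 is spent in that remote command. An illustrative way to inspect the drop-in on the guest, assuming the crio systemd unit in the minikube guest image sources this file, and not something the test itself runs:

    # illustrative only
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile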
	I0419 20:14:49.395900  395150 start.go:293] postStartSetup for "ha-423356" (driver="kvm2")
	I0419 20:14:49.395915  395150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:14:49.395943  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.396285  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:14:49.396314  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.399391  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.399927  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.399958  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.400109  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.400318  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.400473  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.400594  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.485096  395150 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:14:49.489526  395150 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:14:49.489550  395150 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:14:49.489607  395150 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:14:49.489686  395150 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:14:49.489699  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:14:49.489787  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:14:49.499424  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:14:49.525785  395150 start.go:296] duration metric: took 129.868394ms for postStartSetup
	I0419 20:14:49.525835  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.526186  395150 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0419 20:14:49.526223  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.528989  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.529393  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.529423  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.529561  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.529766  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.529956  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.530100  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	W0419 20:14:49.612117  395150 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0419 20:14:49.612148  395150 fix.go:56] duration metric: took 1m31.902335238s for fixHost
	I0419 20:14:49.612172  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.614888  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.615268  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.615290  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.615494  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.615692  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.615925  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.616084  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.616275  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:14:49.616451  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:14:49.616467  395150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:14:49.717843  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557689.687243078
	
	I0419 20:14:49.717871  395150 fix.go:216] guest clock: 1713557689.687243078
	I0419 20:14:49.717880  395150 fix.go:229] Guest: 2024-04-19 20:14:49.687243078 +0000 UTC Remote: 2024-04-19 20:14:49.61215584 +0000 UTC m=+92.049900018 (delta=75.087238ms)
	I0419 20:14:49.717910  395150 fix.go:200] guest clock delta is within tolerance: 75.087238ms
	I0419 20:14:49.717919  395150 start.go:83] releasing machines lock for "ha-423356", held for 1m32.008139997s
	I0419 20:14:49.717974  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.718318  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:14:49.721098  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.721516  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.721540  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.721701  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722383  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722592  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722715  395150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:14:49.722757  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.722808  395150 ssh_runner.go:195] Run: cat /version.json
	I0419 20:14:49.722836  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.725411  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.725713  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.725885  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.725908  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.726104  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.726140  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.726161  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.726278  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.726332  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.726428  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.726446  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.726619  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.726618  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.726744  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.840116  395150 ssh_runner.go:195] Run: systemctl --version
	I0419 20:14:49.846482  395150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:14:50.008366  395150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:14:50.016973  395150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:14:50.017059  395150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:14:50.026623  395150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:14:50.026645  395150 start.go:494] detecting cgroup driver to use...
	I0419 20:14:50.026758  395150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:14:50.043780  395150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:14:50.058105  395150 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:14:50.058168  395150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:14:50.072477  395150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:14:50.086680  395150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:14:50.242992  395150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:14:50.477328  395150 docker.go:233] disabling docker service ...
	I0419 20:14:50.477399  395150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:14:50.513473  395150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:14:50.535695  395150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:14:50.722627  395150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:14:50.912059  395150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:14:50.932826  395150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:14:50.953009  395150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:14:50.953094  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.967745  395150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:14:50.967822  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.978856  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.990101  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.004684  395150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:14:51.016114  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.027581  395150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.038874  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.050141  395150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:14:51.060450  395150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:14:51.073866  395150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:14:51.230409  395150 ssh_runner.go:195] Run: sudo systemctl restart crio
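Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls) before the daemon-reload and CRI-O restart that follow. A minimal sketch of how the resulting drop-in could be inspected on the guest, not part of the captured run:

    # illustrative only
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf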
	I0419 20:14:51.629615  395150 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:14:51.629701  395150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:14:51.643172  395150 start.go:562] Will wait 60s for crictl version
	I0419 20:14:51.643235  395150 ssh_runner.go:195] Run: which crictl
	I0419 20:14:51.647524  395150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:14:51.687196  395150 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:14:51.687301  395150 ssh_runner.go:195] Run: crio --version
	I0419 20:14:51.717719  395150 ssh_runner.go:195] Run: crio --version
	I0419 20:14:51.752523  395150 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:14:51.754148  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:14:51.756956  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:51.757331  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:51.757364  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:51.757590  395150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:14:51.762895  395150 kubeadm.go:877] updating cluster {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:14:51.763090  395150 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:14:51.763156  395150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:14:51.813673  395150 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:14:51.813701  395150 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:14:51.813772  395150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:14:51.851351  395150 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:14:51.851378  395150 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:14:51.851387  395150 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.30.0 crio true true} ...
	I0419 20:14:51.851509  395150 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:14:51.851576  395150 ssh_runner.go:195] Run: crio config
	I0419 20:14:51.904408  395150 cni.go:84] Creating CNI manager for ""
	I0419 20:14:51.904443  395150 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0419 20:14:51.904464  395150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:14:51.904495  395150 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423356 NodeName:ha-423356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:14:51.904716  395150 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
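Note: the kubeadm config printed above is rendered from the kubeadm options block and is later written to /var/tmp/minikube/kubeadm.yaml.new (2147 bytes, see the scp line further down). An illustrative, hedged way to sanity-check such a file against the pinned kubeadm binary on the node, not something the test itself runs:

    # illustrative only; binary and file paths are the ones that appear in this log
    # (preflight may complain on a node that is already provisioned)
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run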
	
	I0419 20:14:51.904753  395150 kube-vip.go:111] generating kube-vip config ...
	I0419 20:14:51.904815  395150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:14:51.917550  395150 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:14:51.917680  395150 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
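Note: the kube-vip manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line below) so the kubelet runs it as a static pod; it leader-elects via the plndr-cp-lock lease and holds the control-plane VIP 192.168.39.254 (the cluster's APIServerHAVIP) on eth0 of whichever control-plane node currently leads. Illustrative checks, not part of the captured run:

    # illustrative only
    ip addr show eth0 | grep 192.168.39.254          # on the current leader the VIP should be present
    kubectl -n kube-system get lease plndr-cp-lock -o wide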
	I0419 20:14:51.917759  395150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:14:51.928492  395150 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:14:51.928592  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 20:14:51.939393  395150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0419 20:14:51.957614  395150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:14:51.975134  395150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0419 20:14:52.006858  395150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:14:52.115125  395150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:14:52.124578  395150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:14:52.423345  395150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:14:52.469109  395150 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.7
	I0419 20:14:52.469134  395150 certs.go:194] generating shared ca certs ...
	I0419 20:14:52.469150  395150 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.469295  395150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:14:52.469341  395150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:14:52.469351  395150 certs.go:256] generating profile certs ...
	I0419 20:14:52.469417  395150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:14:52.469444  395150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a
	I0419 20:14:52.469456  395150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.111 192.168.39.254]
	I0419 20:14:52.830008  395150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a ...
	I0419 20:14:52.830049  395150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a: {Name:mk0dc2583e0f7154aa0905cbefab2d5317314ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.830242  395150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a ...
	I0419 20:14:52.830267  395150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a: {Name:mk2defad1ff8d9549d78845d6c6dd19f6514872f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.830364  395150 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:14:52.830522  395150 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:14:52.830656  395150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:14:52.830682  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:14:52.830694  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:14:52.830704  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:14:52.830713  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:14:52.830723  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:14:52.830736  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:14:52.830744  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:14:52.830756  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:14:52.830799  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:14:52.830830  395150 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:14:52.830840  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:14:52.830865  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:14:52.830892  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:14:52.830916  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:14:52.830955  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:14:52.830980  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:14:52.830997  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:52.831017  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:14:52.831732  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:14:53.082686  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:14:53.322741  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:14:53.385663  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:14:53.437076  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0419 20:14:53.479819  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 20:14:53.510914  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:14:53.544550  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:14:53.575010  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:14:53.617993  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:14:53.659021  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:14:53.730060  395150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:14:53.789720  395150 ssh_runner.go:195] Run: openssl version
	I0419 20:14:53.797548  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:14:53.813678  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.819424  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.819495  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.826718  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:14:53.838444  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:14:53.850916  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.863599  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.863670  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.870587  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:14:53.889682  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:14:53.907227  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.915094  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.915164  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.928643  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:14:53.941191  395150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:14:53.948678  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:14:53.957670  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:14:53.964025  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:14:53.972572  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:14:53.983092  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:14:53.993120  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
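Note: the chain of openssl x509 -checkend 86400 runs above verifies that each control-plane certificate (apiserver-etcd-client, apiserver-kubelet-client, the etcd server/peer/healthcheck-client certs, front-proxy-client) remains valid for at least the next 86400 seconds (24 hours); openssl exits non-zero if a certificate would expire inside that window. A minimal illustration of the same check, not part of the captured run:

    # illustrative only: exit status 0 means "still valid for at least another 24h"
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo ok || echo "expiring within 24h (or unreadable)"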
	I0419 20:14:54.001177  395150 kubeadm.go:391] StartCluster: {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:14:54.001375  395150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:14:54.001453  395150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:14:54.078997  395150 cri.go:89] found id: "e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2"
	I0419 20:14:54.079026  395150 cri.go:89] found id: "51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e"
	I0419 20:14:54.079042  395150 cri.go:89] found id: "331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd"
	I0419 20:14:54.079047  395150 cri.go:89] found id: "31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40"
	I0419 20:14:54.079052  395150 cri.go:89] found id: "80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5"
	I0419 20:14:54.079056  395150 cri.go:89] found id: "483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d"
	I0419 20:14:54.079060  395150 cri.go:89] found id: "81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc"
	I0419 20:14:54.079064  395150 cri.go:89] found id: "8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	I0419 20:14:54.079068  395150 cri.go:89] found id: "95f24d776dec7f41671b86692950532aad53a72dc9d0ebde106a468c54958596"
	I0419 20:14:54.079077  395150 cri.go:89] found id: "c7f33bcee24d50606a5525fcf4daca0f2da2fd97c77a364aec2a6d62d257aacd"
	I0419 20:14:54.079081  395150 cri.go:89] found id: "dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5"
	I0419 20:14:54.079086  395150 cri.go:89] found id: "2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24"
	I0419 20:14:54.079090  395150 cri.go:89] found id: "b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573"
	I0419 20:14:54.079094  395150 cri.go:89] found id: "e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad"
	I0419 20:14:54.079101  395150 cri.go:89] found id: "7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861"
	I0419 20:14:54.079109  395150 cri.go:89] found id: "1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d"
	I0419 20:14:54.079113  395150 cri.go:89] found id: ""
	I0419 20:14:54.079175  395150 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.907739695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557850907714330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=581f94b3-0bf1-4149-baac-2f1c1b47b609 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.908763074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d990601d-cb94-4c12-80a8-9509a3cb90f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.908837370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d990601d-cb94-4c12-80a8-9509a3cb90f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.909315933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa
8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9
ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string
{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6
773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d990601d-cb94-4c12-80a8-9509a3cb90f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.958842826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc8d3c60-8a90-4cd2-9c25-4488cebc7986 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.958926169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc8d3c60-8a90-4cd2-9c25-4488cebc7986 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.960174369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ef70e95-eb14-481f-9554-d1d73f3b5bbe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.960619633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557850960593933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ef70e95-eb14-481f-9554-d1d73f3b5bbe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.961100682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1319669-6bb7-407a-891c-b6316db7bc39 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.961177922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1319669-6bb7-407a-891c-b6316db7bc39 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:30 ha-423356 crio[4115]: time="2024-04-19 20:17:30.961547248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa
8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9
ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string
{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6
773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1319669-6bb7-407a-891c-b6316db7bc39 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.009389626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e91be3b-eba2-4848-a71b-89b588ae500e name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.009465230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e91be3b-eba2-4848-a71b-89b588ae500e name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.010437611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3b477f1-1f3f-442d-b396-236234d9bc6c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.010842638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557851010820305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3b477f1-1f3f-442d-b396-236234d9bc6c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.011473680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23c9d243-42d0-49d1-be8e-76eae0c9c905 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.011529847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23c9d243-42d0-49d1-be8e-76eae0c9c905 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.011913822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa
8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9
ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string
{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6
773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23c9d243-42d0-49d1-be8e-76eae0c9c905 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.059407345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=edf8e7ca-f3df-43cc-aa95-fb4c3c0d4289 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.059487413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edf8e7ca-f3df-43cc-aa95-fb4c3c0d4289 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.061016859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66dc71ef-87bf-47fb-8d98-dd72725d3eb5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.061507011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713557851061479460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66dc71ef-87bf-47fb-8d98-dd72725d3eb5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.062333890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db1bdc4f-2b89-43db-9640-8bf16af8e389 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.062393707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db1bdc4f-2b89-43db-9640-8bf16af8e389 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:17:31 ha-423356 crio[4115]: time="2024-04-19 20:17:31.062845138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa
8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9
ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string
{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6
773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0b
f559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a18
0eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db1bdc4f-2b89-43db-9640-8bf16af8e389 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0536a8eca2340       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   8feea75542997       kube-controller-manager-ha-423356
	91b7d5d464a5c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   93cf72df1c144       kindnet-bqwfr
	3f764732cb42d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   1412ac6cbba35       kube-apiserver-ha-423356
	3f45c0debfb51       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   4ce098ab55cd3       busybox-fc5497c4f-wqfc4
	7324bf6c7d4d2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   895e16416b862       kube-vip-ha-423356
	81c24d896b86f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   b9bd34a0c38f4       kube-proxy-chd2r
	e67b63d64b788       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0dd6b9b5f2ae8       coredns-7db6d8ff4d-rr7zk
	51ec7d0458ebe       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   a4b7516b1af9c       kube-scheduler-ha-423356
	331f89f692a2d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   6d9c82ce1c2c0       coredns-7db6d8ff4d-9td9f
	31e5d247baaae       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   1412ac6cbba35       kube-apiserver-ha-423356
	80df63a7dd481       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   ae6f7aaaed4e6       etcd-ha-423356
	483cbd68c3bcc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   8feea75542997       kube-controller-manager-ha-423356
	81b2b256c447c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   bc5be90cc38ed       kindnet-bqwfr
	8933eb68a303d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   3b67c31972de6       storage-provisioner
	3b80b69bd108f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   027b57294cfbd       busybox-fc5497c4f-wqfc4
	dcfa7c435542c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   14c798e2b76b0       coredns-7db6d8ff4d-9td9f
	2382f52abc364       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   8a34b24c4a7dd       coredns-7db6d8ff4d-rr7zk
	b5377046480e9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   a9af78af7cd87       kube-proxy-chd2r
	7f1baf88d5884       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago       Exited              kube-scheduler            0                   68e93a81da913       kube-scheduler-ha-423356
	1572778d3f528       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   9ba5078b4acef       etcd-ha-423356
	
	
	==> coredns [2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24] <==
	[INFO] 10.244.1.2:34902 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201571s
	[INFO] 10.244.1.2:53225 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001465991s
	[INFO] 10.244.1.2:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258304s
	[INFO] 10.244.1.2:59316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128123s
	[INFO] 10.244.1.2:48977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110722s
	[INFO] 10.244.0.4:40375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793494s
	[INFO] 10.244.0.4:60622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049591s
	[INFO] 10.244.0.4:34038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003778s
	[INFO] 10.244.0.4:51412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043214s
	[INFO] 10.244.0.4:56955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042946s
	[INFO] 10.244.2.2:46864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134976s
	[INFO] 10.244.2.2:34230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011483s
	[INFO] 10.244.1.2:38189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097409s
	[INFO] 10.244.1.2:33041 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080538s
	[INFO] 10.244.0.4:37791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018566s
	[INFO] 10.244.0.4:46485 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061131s
	[INFO] 10.244.0.4:50872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086293s
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142168s
	[INFO] 10.244.1.2:55061 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177752s
	[INFO] 10.244.0.4:44369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008812s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1890&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1878&timeout=9m34s&timeoutSeconds=574&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46366->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46366->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46344->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46344->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1366665584]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Apr-2024 20:15:08.103) (total time: 10199ms):
	Trace[1366665584]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer 10199ms (20:15:18.303)
	Trace[1366665584]: [10.199297468s] [10.199297468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5] <==
	[INFO] 10.244.2.2:49259 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138019s
	[INFO] 10.244.1.2:50375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000377277s
	[INFO] 10.244.1.2:43502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916758s
	[INFO] 10.244.0.4:50440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109012s
	[INFO] 10.244.0.4:50457 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001351323s
	[INFO] 10.244.0.4:57273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119319s
	[INFO] 10.244.2.2:49275 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210181s
	[INFO] 10.244.2.2:41514 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192084s
	[INFO] 10.244.1.2:56219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000465859s
	[INFO] 10.244.1.2:60572 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114905s
	[INFO] 10.244.0.4:52874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098566s
	[INFO] 10.244.2.2:47734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249839s
	[INFO] 10.244.2.2:50981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179648s
	[INFO] 10.244.2.2:34738 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109005s
	[INFO] 10.244.1.2:37966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181053s
	[INFO] 10.244.1.2:48636 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116821s
	[INFO] 10.244.1.2:52580 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000260337s
	[INFO] 10.244.0.4:43327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088111s
	[INFO] 10.244.0.4:47823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105899s
	[INFO] 10.244.0.4:41223 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050192s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1183407982]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Apr-2024 20:15:05.264) (total time: 13039ms):
	Trace[1183407982]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer 13039ms (20:15:18.304)
	Trace[1183407982]: [13.039743899s] [13.039743899s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58100->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58100->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-423356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:15:39 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:15:39 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:15:39 +0000   Fri, 19 Apr 2024 20:03:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:15:39 +0000   Fri, 19 Apr 2024 20:03:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-423356
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133e52820e114c7aa16933b82eb1ac6a
	  System UUID:                133e5282-0e11-4c7a-a169-33b82eb1ac6a
	  Boot ID:                    752cc004-2412-44ee-9782-2d20c1c3993d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqfc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-9td9f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-rr7zk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-423356                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-bqwfr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-423356             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-423356    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-chd2r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-423356             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-423356                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-423356 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-423356 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-423356 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-423356 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Warning  ContainerGCFailed        2m47s (x2 over 3m47s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           101s                   node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           101s                   node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	
	
	Name:               ha-423356-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:17:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-423356-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 346b871eba5f43789a16ce3dbbb4ec2c
	  System UUID:                346b871e-ba5f-4378-9a16-ce3dbbb4ec2c
	  Boot ID:                    7489ab85-d407-430f-8104-10a2700c6b0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fq5c2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-423356-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7ktc2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-423356-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-423356-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d56ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-423356-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-423356-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 103s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  NodeNotReady             9m5s                   node-controller  Node ha-423356-m02 status is now: NodeNotReady
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m15s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m15s)  kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m15s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           101s                   node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           101s                   node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           25s                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	
	
	Name:               ha-423356-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_06_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:06:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:17:04 +0000   Fri, 19 Apr 2024 20:16:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:17:04 +0000   Fri, 19 Apr 2024 20:16:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:17:04 +0000   Fri, 19 Apr 2024 20:16:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:17:04 +0000   Fri, 19 Apr 2024 20:16:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-423356-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98c76a7ef5ce4a80bed88d9102770ac6
	  System UUID:                98c76a7e-f5ce-4a80-bed8-8d9102770ac6
	  Boot ID:                    b5852c1b-6247-4846-b686-e3118b0e45fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4t8f9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-423356-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fkd5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-423356-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-423356-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sr4gd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-423356-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-423356-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-423356-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal   RegisteredNode           101s               node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal   RegisteredNode           101s               node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	  Normal   NodeNotReady             61s                node-controller  Node ha-423356-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s (x3 over 57s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x3 over 57s)  kubelet          Node ha-423356-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x3 over 57s)  kubelet          Node ha-423356-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s (x2 over 57s)  kubelet          Node ha-423356-m03 has been rebooted, boot id: b5852c1b-6247-4846-b686-e3118b0e45fd
	  Normal   NodeReady                57s (x2 over 57s)  kubelet          Node ha-423356-m03 status is now: NodeReady
	  Normal   RegisteredNode           25s                node-controller  Node ha-423356-m03 event: Registered Node ha-423356-m03 in Controller
	
	
	Name:               ha-423356-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:07:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:17:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:17:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:17:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:17:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:17:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-423356-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f1d7a6307945baa5aa5c71ec020b88
	  System UUID:                22f1d7a6-3079-45ba-a5aa-5c71ec020b88
	  Boot ID:                    99ccda9d-ec57-499a-a554-7417a225d5a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wj85m       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7x69m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-423356-m04 status is now: NodeReady
	  Normal   RegisteredNode           101s               node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           101s               node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   NodeNotReady             61s                node-controller  Node ha-423356-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           25s                node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-423356-m04 has been rebooted, boot id: 99ccda9d-ec57-499a-a554-7417a225d5a2
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-423356-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.106083] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.064815] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057768] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.176439] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.158804] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285884] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.425458] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.068806] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.328517] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.914724] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.592182] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.083040] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.869346] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:04] kauditd_printk_skb: 76 callbacks suppressed
	[Apr19 20:11] kauditd_printk_skb: 1 callbacks suppressed
	[Apr19 20:14] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[  +0.188401] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[  +0.278508] systemd-fstab-generator[3988]: Ignoring "noauto" option for root device
	[  +0.189057] systemd-fstab-generator[4035]: Ignoring "noauto" option for root device
	[  +0.334823] systemd-fstab-generator[4085]: Ignoring "noauto" option for root device
	[  +1.129238] systemd-fstab-generator[4341]: Ignoring "noauto" option for root device
	[  +3.559773] kauditd_printk_skb: 236 callbacks suppressed
	[Apr19 20:15] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d] <==
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:13:18.620554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:13:12.228251Z","time spent":"6.392299825s","remote":"127.0.0.1:54206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true "}
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-19T20:13:18.615971Z","caller":"traceutil/trace.go:171","msg":"trace[104768494] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"6.744332203s","start":"2024-04-19T20:13:11.871635Z","end":"2024-04-19T20:13:18.615967Z","steps":["trace[104768494] 'agreement among raft nodes before linearized reading'  (duration: 6.728393718s)"],"step_count":1}
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:13:18.673635Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:13:18.673697Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-19T20:13:18.675273Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bb39151d8411994b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-19T20:13:18.675503Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675585Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675657Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675745Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675901Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.676038Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.676154Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676207Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676254Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676368Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676433Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676586Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676663Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.679746Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-04-19T20:13:18.679937Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-04-19T20:13:18.679974Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-423356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	
	
	==> etcd [80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5] <==
	{"level":"warn","ts":"2024-04-19T20:16:34.260168Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:34.260298Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:36.824169Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:36.824277Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:39.260334Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:39.260451Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:40.826997Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:40.827042Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:44.261156Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:44.261179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:44.830341Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:44.830486Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e763362e070ef6ce","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-19T20:16:47.63989Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:16:47.658519Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"e763362e070ef6ce","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-19T20:16:47.658577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:16:47.678802Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"e763362e070ef6ce","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-19T20:16:47.678924Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:16:47.688003Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:16:47.689462Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:16:47.706845Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.111:45428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-19T20:16:49.261609Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:49.26173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-19T20:16:58.539492Z","caller":"traceutil/trace.go:171","msg":"trace[1123723025] transaction","detail":"{read_only:false; response_revision:2453; number_of_response:1; }","duration":"100.117958ms","start":"2024-04-19T20:16:58.439348Z","end":"2024-04-19T20:16:58.539466Z","steps":["trace[1123723025] 'process raft request'  (duration: 100.005726ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:17:26.171967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.721343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-7x69m\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2024-04-19T20:17:26.172331Z","caller":"traceutil/trace.go:171","msg":"trace[1352967015] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-7x69m; range_end:; response_count:1; response_revision:2542; }","duration":"166.260276ms","start":"2024-04-19T20:17:26.00603Z","end":"2024-04-19T20:17:26.172291Z","steps":["trace[1352967015] 'range keys from in-memory index tree'  (duration: 164.615072ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:31 up 14 min,  0 users,  load average: 0.35, 0.44, 0.27
	Linux ha-423356 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc] <==
	I0419 20:14:51.041032       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 20:14:51.041186       1 main.go:107] hostIP = 192.168.39.7
	podIP = 192.168.39.7
	I0419 20:14:51.041415       1 main.go:116] setting mtu 1500 for CNI 
	I0419 20:14:51.041460       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 20:14:51.041484       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	
	==> kindnet [91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd] <==
	I0419 20:16:58.071030       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:17:08.080511       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:17:08.080557       1 main.go:227] handling current node
	I0419 20:17:08.080570       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:17:08.080576       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:17:08.080736       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:17:08.080774       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:17:08.080900       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:17:08.080931       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:17:18.089713       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:17:18.089952       1 main.go:227] handling current node
	I0419 20:17:18.090027       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:17:18.090135       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:17:18.090415       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:17:18.090462       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:17:18.090527       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:17:18.090546       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:17:28.105912       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:17:28.106024       1 main.go:227] handling current node
	I0419 20:17:28.106127       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:17:28.106161       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:17:28.106300       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0419 20:17:28.106323       1 main.go:250] Node ha-423356-m03 has CIDR [10.244.2.0/24] 
	I0419 20:17:28.106376       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:17:28.106395       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40] <==
	I0419 20:14:53.587770       1 options.go:221] external host was not specified, using 192.168.39.7
	I0419 20:14:53.592718       1 server.go:148] Version: v1.30.0
	I0419 20:14:53.592782       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:14:54.519103       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 20:14:54.522042       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 20:14:54.522126       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 20:14:54.522205       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:14:54.522275       1 instance.go:299] Using reconciler: lease
	W0419 20:15:14.515689       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0419 20:15:14.515900       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0419 20:15:14.523309       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7] <==
	I0419 20:15:37.836276       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 20:15:37.832800       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 20:15:37.889462       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 20:15:37.889504       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 20:15:37.941184       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:15:37.941228       1 policy_source.go:224] refreshing policies
	I0419 20:15:37.962482       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:15:37.989951       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 20:15:37.990135       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:15:37.990152       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:15:37.990159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:15:37.990247       1 cache.go:39] Caches are synced for autoregister controller
	I0419 20:15:38.031429       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 20:15:38.031476       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 20:15:38.031518       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:15:38.031618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 20:15:38.032028       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:15:38.032493       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 20:15:38.038014       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 20:15:38.038390       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0419 20:15:38.047502       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0419 20:15:38.837542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0419 20:15:39.272885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0419 20:15:39.274619       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 20:15:39.282621       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d] <==
	I0419 20:15:50.624796       1 shared_informer.go:320] Caches are synced for disruption
	I0419 20:15:50.656148       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 20:15:50.667346       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 20:15:50.712024       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m02"
	I0419 20:15:50.712601       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m03"
	I0419 20:15:50.712827       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356-m04"
	I0419 20:15:50.712852       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423356"
	I0419 20:15:50.715257       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 20:15:50.849600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="266.382755ms"
	I0419 20:15:50.851566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="268.820724ms"
	I0419 20:15:50.852119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="134.152µs"
	I0419 20:15:50.852250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.523µs"
	I0419 20:15:51.074815       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 20:15:51.074859       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 20:15:51.112678       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 20:16:00.199621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.250752ms"
	I0419 20:16:00.200566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="194.254µs"
	I0419 20:16:10.241575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.034577ms"
	I0419 20:16:10.241753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="98.979µs"
	I0419 20:16:30.894512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.76766ms"
	I0419 20:16:30.894861       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.613µs"
	I0419 20:16:35.289387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.315µs"
	I0419 20:16:52.389411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.701626ms"
	I0419 20:16:52.390026       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="219.79µs"
	I0419 20:17:22.738683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423356-m04"
	
	
	==> kube-controller-manager [483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d] <==
	I0419 20:14:54.659457       1 serving.go:380] Generated self-signed cert in-memory
	I0419 20:14:55.115274       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 20:14:55.115316       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:14:55.116893       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 20:14:55.117130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:14:55.117235       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 20:14:55.117441       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0419 20:15:15.529626       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.7:8443/healthz\": dial tcp 192.168.39.7:8443: connect: connection refused"
	
	
	==> kube-proxy [81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad] <==
	I0419 20:14:55.154589       1 server_linux.go:69] "Using iptables proxy"
	E0419 20:14:56.798758       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:14:59.872420       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:02.943728       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:09.086602       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:21.377509       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0419 20:15:40.251977       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0419 20:15:40.297024       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:15:40.297150       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:15:40.297169       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:15:40.299910       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:15:40.300329       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:15:40.300384       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:15:40.301849       1 config.go:192] "Starting service config controller"
	I0419 20:15:40.301931       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:15:40.301981       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:15:40.301999       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:15:40.302727       1 config.go:319] "Starting node config controller"
	I0419 20:15:40.309815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:15:40.309865       1 shared_informer.go:320] Caches are synced for node config
	I0419 20:15:40.403013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:15:40.403190       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573] <==
	E0419 20:12:04.767495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.983879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.983945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.984012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.984130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.984843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.984966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:23.199615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:23.200013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:26.271600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:26.271945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:26.272028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:26.272152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:44.703558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:44.703623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:47.775703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:47.775802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:50.847775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:50.847853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e] <==
	W0419 20:15:31.723340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0419 20:15:31.723486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	W0419 20:15:37.896825       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:15:37.897556       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:15:37.937544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:15:37.937647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:15:37.937819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 20:15:37.937864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 20:15:37.938041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0419 20:15:37.938155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0419 20:15:37.938255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:15:37.938288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:15:37.938370       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 20:15:37.940137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 20:15:37.940281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 20:15:37.940335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 20:15:37.940415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:15:37.940448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:15:37.940516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:15:37.940602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:15:37.940728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:15:37.940790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 20:15:37.940924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0419 20:15:37.940986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 20:16:03.148334       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861] <==
	W0419 20:13:16.545364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:13:16.545441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:13:16.565907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 20:13:16.565972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0419 20:13:16.575881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 20:13:16.576149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 20:13:16.588440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:13:16.588573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:13:16.615131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:13:16.616748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:13:16.616629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 20:13:16.617349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 20:13:16.963270       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:13:16.963394       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:13:17.078162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.078335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0419 20:13:17.210790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:13:17.210869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:13:17.334291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0419 20:13:17.334412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0419 20:13:17.360234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.360327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 20:13:17.443256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.443510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:18.559682       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 19 20:15:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:15:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:15:51 ha-423356 kubelet[1380]: I0419 20:15:51.646934    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:15:51 ha-423356 kubelet[1380]: E0419 20:15:51.647945    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:16:02 ha-423356 kubelet[1380]: I0419 20:16:02.647386    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:16:02 ha-423356 kubelet[1380]: E0419 20:16:02.647603    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:16:14 ha-423356 kubelet[1380]: I0419 20:16:14.646935    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:16:14 ha-423356 kubelet[1380]: E0419 20:16:14.647261    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:16:27 ha-423356 kubelet[1380]: I0419 20:16:27.647529    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:16:27 ha-423356 kubelet[1380]: E0419 20:16:27.647875    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:16:33 ha-423356 kubelet[1380]: I0419 20:16:33.646833    1380 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-423356" podUID="4385b850-a4b2-4f21-acf1-3d720198e1c2"
	Apr 19 20:16:33 ha-423356 kubelet[1380]: I0419 20:16:33.665622    1380 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-423356"
	Apr 19 20:16:42 ha-423356 kubelet[1380]: I0419 20:16:42.648007    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:16:42 ha-423356 kubelet[1380]: E0419 20:16:42.648900    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:16:44 ha-423356 kubelet[1380]: E0419 20:16:44.669626    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:16:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:16:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:16:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:16:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:16:54 ha-423356 kubelet[1380]: I0419 20:16:54.648295    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:16:54 ha-423356 kubelet[1380]: E0419 20:16:54.648768    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:17:06 ha-423356 kubelet[1380]: I0419 20:17:06.649767    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:17:06 ha-423356 kubelet[1380]: E0419 20:17:06.650187    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	Apr 19 20:17:17 ha-423356 kubelet[1380]: I0419 20:17:17.647316    1380 scope.go:117] "RemoveContainer" containerID="8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	Apr 19 20:17:17 ha-423356 kubelet[1380]: E0419 20:17:17.648118    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(956e5c6c-de0e-4f78-9151-d456dc732bdd)\"" pod="kube-system/storage-provisioner" podUID="956e5c6c-de0e-4f78-9151-d456dc732bdd"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:17:30.575135  396504 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
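The `bufio.Scanner: token too long` error in the stderr capture above is Go's bufio.Scanner hitting its default per-line limit (bufio.MaxScanTokenSize, 64 KiB) while reading lastStart.txt; any single log line longer than that makes Scan() stop with this error. The following is a minimal, illustrative sketch of the failure mode and the standard workaround (raising the buffer cap with Scanner.Buffer). It is not minikube's logs.go code, and the file path is hypothetical.

// Sketch only: reproduces and avoids "bufio.Scanner: token too long".
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/tmp/lastStart.txt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is bufio.MaxScanTokenSize (64 KiB). A single log
	// line longer than that makes sc.Scan() fail with "token too long".
	// Raising the buffer cap avoids the error:
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	for sc.Scan() {
		_ = sc.Text() // process the line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err) // without Buffer(): "token too long"
	}
}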
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-423356 -n ha-423356
helpers_test.go:261: (dbg) Run:  kubectl --context ha-423356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 stop -v=7 --alsologtostderr: exit status 82 (2m0.497158418s)

                                                
                                                
-- stdout --
	* Stopping node "ha-423356-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:17:50.867699  397364 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:17:50.867885  397364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:17:50.867897  397364 out.go:304] Setting ErrFile to fd 2...
	I0419 20:17:50.867902  397364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:17:50.868090  397364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:17:50.868354  397364 out.go:298] Setting JSON to false
	I0419 20:17:50.868433  397364 mustload.go:65] Loading cluster: ha-423356
	I0419 20:17:50.868832  397364 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:17:50.868936  397364 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:17:50.869115  397364 mustload.go:65] Loading cluster: ha-423356
	I0419 20:17:50.869268  397364 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:17:50.869307  397364 stop.go:39] StopHost: ha-423356-m04
	I0419 20:17:50.869722  397364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:17:50.869771  397364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:17:50.885041  397364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45919
	I0419 20:17:50.885637  397364 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:17:50.886293  397364 main.go:141] libmachine: Using API Version  1
	I0419 20:17:50.886326  397364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:17:50.886694  397364 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:17:50.889169  397364 out.go:177] * Stopping node "ha-423356-m04"  ...
	I0419 20:17:50.890775  397364 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0419 20:17:50.890816  397364 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:17:50.891082  397364 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0419 20:17:50.891111  397364 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:17:50.894656  397364 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:17:50.895096  397364 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:17:17 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:17:50.895125  397364 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:17:50.895298  397364 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:17:50.895505  397364 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:17:50.895651  397364 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:17:50.895768  397364 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	I0419 20:17:50.980896  397364 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0419 20:17:51.035848  397364 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0419 20:17:51.090341  397364 main.go:141] libmachine: Stopping "ha-423356-m04"...
	I0419 20:17:51.090384  397364 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:17:51.092082  397364 main.go:141] libmachine: (ha-423356-m04) Calling .Stop
	I0419 20:17:51.095730  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 0/120
	I0419 20:17:52.097218  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 1/120
	I0419 20:17:53.099256  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 2/120
	I0419 20:17:54.101223  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 3/120
	I0419 20:17:55.103525  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 4/120
	I0419 20:17:56.105581  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 5/120
	I0419 20:17:57.107138  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 6/120
	I0419 20:17:58.108389  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 7/120
	I0419 20:17:59.110516  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 8/120
	I0419 20:18:00.111781  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 9/120
	I0419 20:18:01.113272  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 10/120
	I0419 20:18:02.114718  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 11/120
	I0419 20:18:03.116122  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 12/120
	I0419 20:18:04.117447  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 13/120
	I0419 20:18:05.118829  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 14/120
	I0419 20:18:06.120973  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 15/120
	I0419 20:18:07.123248  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 16/120
	I0419 20:18:08.124766  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 17/120
	I0419 20:18:09.126075  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 18/120
	I0419 20:18:10.127817  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 19/120
	I0419 20:18:11.130187  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 20/120
	I0419 20:18:12.131486  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 21/120
	I0419 20:18:13.132856  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 22/120
	I0419 20:18:14.135159  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 23/120
	I0419 20:18:15.136798  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 24/120
	I0419 20:18:16.138853  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 25/120
	I0419 20:18:17.140300  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 26/120
	I0419 20:18:18.141599  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 27/120
	I0419 20:18:19.142799  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 28/120
	I0419 20:18:20.144027  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 29/120
	I0419 20:18:21.146128  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 30/120
	I0419 20:18:22.147358  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 31/120
	I0419 20:18:23.148742  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 32/120
	I0419 20:18:24.150103  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 33/120
	I0419 20:18:25.151525  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 34/120
	I0419 20:18:26.153440  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 35/120
	I0419 20:18:27.155036  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 36/120
	I0419 20:18:28.156493  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 37/120
	I0419 20:18:29.157788  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 38/120
	I0419 20:18:30.159037  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 39/120
	I0419 20:18:31.161265  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 40/120
	I0419 20:18:32.162842  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 41/120
	I0419 20:18:33.164261  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 42/120
	I0419 20:18:34.165777  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 43/120
	I0419 20:18:35.167466  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 44/120
	I0419 20:18:36.169202  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 45/120
	I0419 20:18:37.170531  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 46/120
	I0419 20:18:38.172038  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 47/120
	I0419 20:18:39.173532  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 48/120
	I0419 20:18:40.175396  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 49/120
	I0419 20:18:41.177674  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 50/120
	I0419 20:18:42.179293  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 51/120
	I0419 20:18:43.180693  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 52/120
	I0419 20:18:44.182199  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 53/120
	I0419 20:18:45.183459  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 54/120
	I0419 20:18:46.185583  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 55/120
	I0419 20:18:47.187002  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 56/120
	I0419 20:18:48.188550  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 57/120
	I0419 20:18:49.190003  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 58/120
	I0419 20:18:50.191544  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 59/120
	I0419 20:18:51.193877  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 60/120
	I0419 20:18:52.195922  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 61/120
	I0419 20:18:53.197451  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 62/120
	I0419 20:18:54.199005  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 63/120
	I0419 20:18:55.200524  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 64/120
	I0419 20:18:56.202455  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 65/120
	I0419 20:18:57.204050  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 66/120
	I0419 20:18:58.205327  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 67/120
	I0419 20:18:59.206842  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 68/120
	I0419 20:19:00.208530  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 69/120
	I0419 20:19:01.210667  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 70/120
	I0419 20:19:02.212261  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 71/120
	I0419 20:19:03.213754  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 72/120
	I0419 20:19:04.215496  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 73/120
	I0419 20:19:05.217038  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 74/120
	I0419 20:19:06.218714  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 75/120
	I0419 20:19:07.220161  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 76/120
	I0419 20:19:08.221499  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 77/120
	I0419 20:19:09.222932  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 78/120
	I0419 20:19:10.224521  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 79/120
	I0419 20:19:11.226155  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 80/120
	I0419 20:19:12.227755  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 81/120
	I0419 20:19:13.229097  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 82/120
	I0419 20:19:14.230336  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 83/120
	I0419 20:19:15.231667  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 84/120
	I0419 20:19:16.233772  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 85/120
	I0419 20:19:17.235319  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 86/120
	I0419 20:19:18.236715  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 87/120
	I0419 20:19:19.238305  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 88/120
	I0419 20:19:20.239513  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 89/120
	I0419 20:19:21.241684  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 90/120
	I0419 20:19:22.243155  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 91/120
	I0419 20:19:23.244682  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 92/120
	I0419 20:19:24.246198  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 93/120
	I0419 20:19:25.247663  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 94/120
	I0419 20:19:26.249559  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 95/120
	I0419 20:19:27.251404  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 96/120
	I0419 20:19:28.252971  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 97/120
	I0419 20:19:29.255129  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 98/120
	I0419 20:19:30.256762  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 99/120
	I0419 20:19:31.258898  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 100/120
	I0419 20:19:32.260524  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 101/120
	I0419 20:19:33.262075  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 102/120
	I0419 20:19:34.263504  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 103/120
	I0419 20:19:35.265222  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 104/120
	I0419 20:19:36.267439  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 105/120
	I0419 20:19:37.268704  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 106/120
	I0419 20:19:38.270985  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 107/120
	I0419 20:19:39.272345  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 108/120
	I0419 20:19:40.273774  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 109/120
	I0419 20:19:41.275722  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 110/120
	I0419 20:19:42.277180  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 111/120
	I0419 20:19:43.279412  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 112/120
	I0419 20:19:44.280854  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 113/120
	I0419 20:19:45.283065  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 114/120
	I0419 20:19:46.285080  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 115/120
	I0419 20:19:47.287338  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 116/120
	I0419 20:19:48.289278  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 117/120
	I0419 20:19:49.291158  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 118/120
	I0419 20:19:50.293502  397364 main.go:141] libmachine: (ha-423356-m04) Waiting for machine to stop 119/120
	I0419 20:19:51.294684  397364 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0419 20:19:51.294757  397364 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0419 20:19:51.297114  397364 out.go:177] 
	W0419 20:19:51.298759  397364 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0419 20:19:51.298781  397364 out.go:239] * 
	* 
	W0419 20:19:51.301800  397364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 20:19:51.303421  397364 out.go:177] 

                                                
                                                
** /stderr **
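The "Waiting for machine to stop N/120" progression and the final GUEST_STOP_TIMEOUT above show a fixed-budget poll: the driver requests a stop, then checks the VM state once per second for 120 attempts and gives up if it is still "Running". Below is a minimal sketch of that pattern under stated assumptions; it is not libmachine's actual implementation, and the stop/getState callbacks are hypothetical stand-ins.

// Sketch only: a one-second poll loop with a fixed attempt budget.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout issues a stop request, then polls the machine state once per
// second up to `attempts` times, mirroring the log lines above.
func stopWithTimeout(stop func() error, getState func() string, attempts int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		if getState() != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	// Still running after every attempt: the condition reported above as
	// `unable to stop vm, current state "Running"` (GUEST_STOP_TIMEOUT, exit 82).
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake VM that ignores the stop request, like ha-423356-m04 in this run.
	err := stopWithTimeout(
		func() error { return nil },        // stop request accepted
		func() string { return "Running" }, // state never changes
		3,                                  // 3 attempts here instead of 120 to keep the demo short
	)
	fmt.Println("stop err:", err)
}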
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-423356 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr: exit status 3 (19.035100749s)

                                                
                                                
-- stdout --
	ha-423356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423356-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:19:51.369594  397800 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:19:51.369714  397800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:19:51.369727  397800 out.go:304] Setting ErrFile to fd 2...
	I0419 20:19:51.369732  397800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:19:51.369931  397800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:19:51.370125  397800 out.go:298] Setting JSON to false
	I0419 20:19:51.370154  397800 mustload.go:65] Loading cluster: ha-423356
	I0419 20:19:51.370201  397800 notify.go:220] Checking for updates...
	I0419 20:19:51.370596  397800 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:19:51.370616  397800 status.go:255] checking status of ha-423356 ...
	I0419 20:19:51.371022  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.371088  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.391472  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0419 20:19:51.392157  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.392754  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.392796  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.393287  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.393544  397800 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:19:51.395181  397800 status.go:330] ha-423356 host status = "Running" (err=<nil>)
	I0419 20:19:51.395210  397800 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:19:51.395543  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.395612  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.410979  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I0419 20:19:51.411453  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.412098  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.412120  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.412507  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.412737  397800 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:19:51.416213  397800 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:19:51.416682  397800 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:19:51.416722  397800 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:19:51.416887  397800 host.go:66] Checking if "ha-423356" exists ...
	I0419 20:19:51.417254  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.417313  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.432316  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0419 20:19:51.432741  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.433256  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.433280  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.433667  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.433910  397800 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:19:51.434120  397800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:19:51.434165  397800 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:19:51.437266  397800 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:19:51.437739  397800 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:19:51.437779  397800 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:19:51.437912  397800 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:19:51.438089  397800 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:19:51.438252  397800 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:19:51.438390  397800 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:19:51.526784  397800 ssh_runner.go:195] Run: systemctl --version
	I0419 20:19:51.534736  397800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:19:51.554383  397800 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:19:51.554414  397800 api_server.go:166] Checking apiserver status ...
	I0419 20:19:51.554458  397800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:19:51.571813  397800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5350/cgroup
	W0419 20:19:51.583644  397800 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5350/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:19:51.583713  397800 ssh_runner.go:195] Run: ls
	I0419 20:19:51.591829  397800 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:19:51.600408  397800 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:19:51.600443  397800 status.go:422] ha-423356 apiserver status = Running (err=<nil>)
	I0419 20:19:51.600457  397800 status.go:257] ha-423356 status: &{Name:ha-423356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:19:51.600480  397800 status.go:255] checking status of ha-423356-m02 ...
	I0419 20:19:51.600898  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.600948  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.616408  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0419 20:19:51.616981  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.617523  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.617548  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.617851  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.618026  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetState
	I0419 20:19:51.619560  397800 status.go:330] ha-423356-m02 host status = "Running" (err=<nil>)
	I0419 20:19:51.619575  397800 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:19:51.619904  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.619954  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.636018  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0419 20:19:51.636432  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.636976  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.637000  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.637420  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.637614  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetIP
	I0419 20:19:51.640349  397800 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:19:51.640775  397800 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:15:05 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:19:51.640812  397800 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:19:51.640956  397800 host.go:66] Checking if "ha-423356-m02" exists ...
	I0419 20:19:51.641299  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.641357  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.656098  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0419 20:19:51.656599  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.657174  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.657210  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.657597  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.657791  397800 main.go:141] libmachine: (ha-423356-m02) Calling .DriverName
	I0419 20:19:51.658011  397800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:19:51.658034  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHHostname
	I0419 20:19:51.660570  397800 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:19:51.661071  397800 main.go:141] libmachine: (ha-423356-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:9f:96", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:15:05 +0000 UTC Type:0 Mac:52:54:00:1e:9f:96 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-423356-m02 Clientid:01:52:54:00:1e:9f:96}
	I0419 20:19:51.661102  397800 main.go:141] libmachine: (ha-423356-m02) DBG | domain ha-423356-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:1e:9f:96 in network mk-ha-423356
	I0419 20:19:51.661213  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHPort
	I0419 20:19:51.661407  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHKeyPath
	I0419 20:19:51.661595  397800 main.go:141] libmachine: (ha-423356-m02) Calling .GetSSHUsername
	I0419 20:19:51.661748  397800 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m02/id_rsa Username:docker}
	I0419 20:19:51.750689  397800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:19:51.769735  397800 kubeconfig.go:125] found "ha-423356" server: "https://192.168.39.254:8443"
	I0419 20:19:51.769784  397800 api_server.go:166] Checking apiserver status ...
	I0419 20:19:51.769829  397800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:19:51.787216  397800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	W0419 20:19:51.800748  397800 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:19:51.800818  397800 ssh_runner.go:195] Run: ls
	I0419 20:19:51.806149  397800 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0419 20:19:51.810612  397800 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0419 20:19:51.810644  397800 status.go:422] ha-423356-m02 apiserver status = Running (err=<nil>)
	I0419 20:19:51.810657  397800 status.go:257] ha-423356-m02 status: &{Name:ha-423356-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:19:51.810683  397800 status.go:255] checking status of ha-423356-m04 ...
	I0419 20:19:51.811027  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.811079  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.827868  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0419 20:19:51.828419  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.829014  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.829044  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.829404  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.829620  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetState
	I0419 20:19:51.831193  397800 status.go:330] ha-423356-m04 host status = "Running" (err=<nil>)
	I0419 20:19:51.831214  397800 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:19:51.831570  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.831624  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.847534  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36369
	I0419 20:19:51.847963  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.848479  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.848510  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.848902  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.849111  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetIP
	I0419 20:19:51.851994  397800 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:19:51.852468  397800 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:17:17 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:19:51.852504  397800 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:19:51.852604  397800 host.go:66] Checking if "ha-423356-m04" exists ...
	I0419 20:19:51.852944  397800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:19:51.852982  397800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:19:51.868499  397800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0419 20:19:51.868963  397800 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:19:51.869442  397800 main.go:141] libmachine: Using API Version  1
	I0419 20:19:51.869464  397800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:19:51.869774  397800 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:19:51.869943  397800 main.go:141] libmachine: (ha-423356-m04) Calling .DriverName
	I0419 20:19:51.870136  397800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:19:51.870157  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHHostname
	I0419 20:19:51.872671  397800 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:19:51.873215  397800 main.go:141] libmachine: (ha-423356-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b0:35", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:17:17 +0000 UTC Type:0 Mac:52:54:00:4f:b0:35 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-423356-m04 Clientid:01:52:54:00:4f:b0:35}
	I0419 20:19:51.873245  397800 main.go:141] libmachine: (ha-423356-m04) DBG | domain ha-423356-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:4f:b0:35 in network mk-ha-423356
	I0419 20:19:51.873406  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHPort
	I0419 20:19:51.873607  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHKeyPath
	I0419 20:19:51.873754  397800 main.go:141] libmachine: (ha-423356-m04) Calling .GetSSHUsername
	I0419 20:19:51.873888  397800 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356-m04/id_rsa Username:docker}
	W0419 20:20:10.340848  397800 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.44:22: connect: no route to host
	W0419 20:20:10.341055  397800 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host
	E0419 20:20:10.341084  397800 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host
	I0419 20:20:10.341096  397800 status.go:257] ha-423356-m04 status: &{Name:ha-423356-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0419 20:20:10.341117  397800 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr" : exit status 3
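The stderr trace above shows what the status check does per node: open an SSH session, run `sudo systemctl is-active --quiet service kubelet`, locate the apiserver process, and probe https://192.168.39.254:8443/healthz. For ha-423356-m04 the SSH dial to 192.168.39.44:22 fails with "no route to host", so the node is reported Host:Error / Kubelet:Nonexistent and the command exits with status 3. Below is a minimal sketch, using only Go's standard library, of the same two reachability probes against the addresses taken from the log; the timeouts and the InsecureSkipVerify TLS setting are illustrative assumptions, not minikube's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// SSH-port reachability probe for the worker that failed above; the 10s
	// timeout is an assumption (the log shows the dial failing with
	// "no route to host").
	conn, err := net.DialTimeout("tcp", "192.168.39.44:22", 10*time.Second)
	if err != nil {
		fmt.Println("ha-423356-m04 unreachable:", err)
	} else {
		conn.Close()
		fmt.Println("ha-423356-m04 ssh port reachable")
	}

	// Apiserver health probe against the HA virtual IP seen in the log;
	// skipping certificate verification here is purely illustrative.
	client := &http.Client{
		Timeout:   10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}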
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-423356 -n ha-423356
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-423356 logs -n 25: (1.814017156s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m04 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp testdata/cp-test.txt                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356:/home/docker/cp-test_ha-423356-m04_ha-423356.txt                       |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356 sudo cat                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356.txt                                 |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m02:/home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m02 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m03:/home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n                                                                 | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | ha-423356-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-423356 ssh -n ha-423356-m03 sudo cat                                          | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC | 19 Apr 24 20:07 UTC |
	|         | /home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-423356 node stop m02 -v=7                                                     | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-423356 node start m02 -v=7                                                    | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-423356 -v=7                                                           | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-423356 -v=7                                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-423356 --wait=true -v=7                                                    | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:13 UTC | 19 Apr 24 20:17 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-423356                                                                | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:17 UTC |                     |
	| node    | ha-423356 node delete m03 -v=7                                                   | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:17 UTC | 19 Apr 24 20:17 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-423356 stop -v=7                                                              | ha-423356 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:13:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:13:17.613989  395150 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:13:17.614253  395150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:13:17.614278  395150 out.go:304] Setting ErrFile to fd 2...
	I0419 20:13:17.614282  395150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:13:17.614467  395150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:13:17.615060  395150 out.go:298] Setting JSON to false
	I0419 20:13:17.616067  395150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6944,"bootTime":1713550654,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:13:17.616138  395150 start.go:139] virtualization: kvm guest
	I0419 20:13:17.618808  395150 out.go:177] * [ha-423356] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:13:17.620839  395150 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:13:17.620818  395150 notify.go:220] Checking for updates...
	I0419 20:13:17.622229  395150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:13:17.623897  395150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:13:17.625359  395150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:13:17.626872  395150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:13:17.628398  395150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:13:17.630607  395150 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:13:17.630763  395150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:13:17.631472  395150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:13:17.631522  395150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:13:17.647414  395150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0419 20:13:17.647804  395150 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:13:17.648406  395150 main.go:141] libmachine: Using API Version  1
	I0419 20:13:17.648433  395150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:13:17.648792  395150 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:13:17.648974  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.687472  395150 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:13:17.688852  395150 start.go:297] selected driver: kvm2
	I0419 20:13:17.688865  395150 start.go:901] validating driver "kvm2" against &{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:13:17.689016  395150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:13:17.689341  395150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:13:17.689422  395150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:13:17.704874  395150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:13:17.705625  395150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:13:17.705682  395150 cni.go:84] Creating CNI manager for ""
	I0419 20:13:17.705694  395150 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0419 20:13:17.705759  395150 start.go:340] cluster config:
	{Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:13:17.705887  395150 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:13:17.707833  395150 out.go:177] * Starting "ha-423356" primary control-plane node in "ha-423356" cluster
	I0419 20:13:17.709237  395150 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:13:17.709271  395150 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:13:17.709282  395150 cache.go:56] Caching tarball of preloaded images
	I0419 20:13:17.709401  395150 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:13:17.709414  395150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:13:17.709535  395150 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/config.json ...
	I0419 20:13:17.709724  395150 start.go:360] acquireMachinesLock for ha-423356: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:13:17.709769  395150 start.go:364] duration metric: took 25.519µs to acquireMachinesLock for "ha-423356"
	I0419 20:13:17.709805  395150 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:13:17.709813  395150 fix.go:54] fixHost starting: 
	I0419 20:13:17.710073  395150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:13:17.710101  395150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:13:17.725270  395150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0419 20:13:17.725775  395150 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:13:17.726349  395150 main.go:141] libmachine: Using API Version  1
	I0419 20:13:17.726374  395150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:13:17.726692  395150 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:13:17.726928  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.727076  395150 main.go:141] libmachine: (ha-423356) Calling .GetState
	I0419 20:13:17.728870  395150 fix.go:112] recreateIfNeeded on ha-423356: state=Running err=<nil>
	W0419 20:13:17.728903  395150 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:13:17.730906  395150 out.go:177] * Updating the running kvm2 "ha-423356" VM ...
	I0419 20:13:17.731983  395150 machine.go:94] provisionDockerMachine start ...
	I0419 20:13:17.732000  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:13:17.732198  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.734753  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.735162  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.735181  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.735396  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.735630  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.735877  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.736052  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.736283  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.736494  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.736513  395150 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:13:17.846439  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:13:17.846483  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:17.846793  395150 buildroot.go:166] provisioning hostname "ha-423356"
	I0419 20:13:17.846825  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:17.847027  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.850089  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.850538  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.850568  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.850725  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.850918  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.851115  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.851287  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.851501  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.851679  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.851692  395150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-423356 && echo "ha-423356" | sudo tee /etc/hostname
	I0419 20:13:17.971434  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-423356
	
	I0419 20:13:17.971469  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:17.974335  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.974720  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:17.974751  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:17.974903  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:17.975101  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.975268  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:17.975386  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:17.975594  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:17.975763  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:17.975778  395150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:13:18.077962  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:13:18.077998  395150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:13:18.078040  395150 buildroot.go:174] setting up certificates
	I0419 20:13:18.078055  395150 provision.go:84] configureAuth start
	I0419 20:13:18.078070  395150 main.go:141] libmachine: (ha-423356) Calling .GetMachineName
	I0419 20:13:18.078380  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:13:18.081559  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.081975  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.082015  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.082129  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.084451  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.084779  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.084799  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.084990  395150 provision.go:143] copyHostCerts
	I0419 20:13:18.085034  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:13:18.085073  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:13:18.085082  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:13:18.085148  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:13:18.085234  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:13:18.085251  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:13:18.085258  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:13:18.085280  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:13:18.085339  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:13:18.085361  395150 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:13:18.085368  395150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:13:18.085388  395150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:13:18.085493  395150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.ha-423356 san=[127.0.0.1 192.168.39.7 ha-423356 localhost minikube]
	I0419 20:13:18.273047  395150 provision.go:177] copyRemoteCerts
	I0419 20:13:18.273109  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:13:18.273136  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.275922  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.276222  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.276250  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.276434  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:18.276629  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.276795  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:18.276910  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:13:18.361571  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:13:18.361677  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:13:18.390997  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:13:18.391108  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0419 20:13:18.417953  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:13:18.418063  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:13:18.444341  395150 provision.go:87] duration metric: took 366.268199ms to configureAuth
	I0419 20:13:18.444383  395150 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:13:18.444604  395150 config.go:182] Loaded profile config "ha-423356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:13:18.444720  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:13:18.447494  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.448012  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:13:18.448050  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:13:18.448196  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:13:18.448416  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.448620  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:13:18.448805  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:13:18.448997  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:13:18.449166  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:13:18.449190  395150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:14:49.395851  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:14:49.395886  395150 machine.go:97] duration metric: took 1m31.663890211s to provisionDockerMachine
	I0419 20:14:49.395900  395150 start.go:293] postStartSetup for "ha-423356" (driver="kvm2")
	I0419 20:14:49.395915  395150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:14:49.395943  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.396285  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:14:49.396314  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.399391  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.399927  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.399958  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.400109  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.400318  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.400473  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.400594  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.485096  395150 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:14:49.489526  395150 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:14:49.489550  395150 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:14:49.489607  395150 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:14:49.489686  395150 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:14:49.489699  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:14:49.489787  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:14:49.499424  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:14:49.525785  395150 start.go:296] duration metric: took 129.868394ms for postStartSetup
	I0419 20:14:49.525835  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.526186  395150 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0419 20:14:49.526223  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.528989  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.529393  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.529423  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.529561  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.529766  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.529956  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.530100  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	W0419 20:14:49.612117  395150 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0419 20:14:49.612148  395150 fix.go:56] duration metric: took 1m31.902335238s for fixHost
	I0419 20:14:49.612172  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.614888  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.615268  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.615290  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.615494  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.615692  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.615925  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.616084  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.616275  395150 main.go:141] libmachine: Using SSH client type: native
	I0419 20:14:49.616451  395150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0419 20:14:49.616467  395150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:14:49.717843  395150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713557689.687243078
	
	I0419 20:14:49.717871  395150 fix.go:216] guest clock: 1713557689.687243078
	I0419 20:14:49.717880  395150 fix.go:229] Guest: 2024-04-19 20:14:49.687243078 +0000 UTC Remote: 2024-04-19 20:14:49.61215584 +0000 UTC m=+92.049900018 (delta=75.087238ms)
	I0419 20:14:49.717910  395150 fix.go:200] guest clock delta is within tolerance: 75.087238ms
	I0419 20:14:49.717919  395150 start.go:83] releasing machines lock for "ha-423356", held for 1m32.008139997s
	I0419 20:14:49.717974  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.718318  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:14:49.721098  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.721516  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.721540  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.721701  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722383  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722592  395150 main.go:141] libmachine: (ha-423356) Calling .DriverName
	I0419 20:14:49.722715  395150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:14:49.722757  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.722808  395150 ssh_runner.go:195] Run: cat /version.json
	I0419 20:14:49.722836  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHHostname
	I0419 20:14:49.725411  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.725713  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.725885  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.725908  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.726104  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.726140  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:49.726161  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:49.726278  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.726332  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHPort
	I0419 20:14:49.726428  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.726446  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHKeyPath
	I0419 20:14:49.726619  395150 main.go:141] libmachine: (ha-423356) Calling .GetSSHUsername
	I0419 20:14:49.726618  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.726744  395150 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/ha-423356/id_rsa Username:docker}
	I0419 20:14:49.840116  395150 ssh_runner.go:195] Run: systemctl --version
	I0419 20:14:49.846482  395150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:14:50.008366  395150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:14:50.016973  395150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:14:50.017059  395150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:14:50.026623  395150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:14:50.026645  395150 start.go:494] detecting cgroup driver to use...
	I0419 20:14:50.026758  395150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:14:50.043780  395150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:14:50.058105  395150 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:14:50.058168  395150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:14:50.072477  395150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:14:50.086680  395150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:14:50.242992  395150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:14:50.477328  395150 docker.go:233] disabling docker service ...
	I0419 20:14:50.477399  395150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:14:50.513473  395150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:14:50.535695  395150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:14:50.722627  395150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:14:50.912059  395150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:14:50.932826  395150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:14:50.953009  395150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:14:50.953094  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.967745  395150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:14:50.967822  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.978856  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:50.990101  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.004684  395150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:14:51.016114  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.027581  395150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.038874  395150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:14:51.050141  395150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:14:51.060450  395150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:14:51.073866  395150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:14:51.230409  395150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:14:51.629615  395150 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:14:51.629701  395150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:14:51.643172  395150 start.go:562] Will wait 60s for crictl version
	I0419 20:14:51.643235  395150 ssh_runner.go:195] Run: which crictl
	I0419 20:14:51.647524  395150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:14:51.687196  395150 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:14:51.687301  395150 ssh_runner.go:195] Run: crio --version
	I0419 20:14:51.717719  395150 ssh_runner.go:195] Run: crio --version
	I0419 20:14:51.752523  395150 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:14:51.754148  395150 main.go:141] libmachine: (ha-423356) Calling .GetIP
	I0419 20:14:51.756956  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:51.757331  395150 main.go:141] libmachine: (ha-423356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:25:62", ip: ""} in network mk-ha-423356: {Iface:virbr1 ExpiryTime:2024-04-19 21:03:18 +0000 UTC Type:0 Mac:52:54:00:aa:25:62 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-423356 Clientid:01:52:54:00:aa:25:62}
	I0419 20:14:51.757364  395150 main.go:141] libmachine: (ha-423356) DBG | domain ha-423356 has defined IP address 192.168.39.7 and MAC address 52:54:00:aa:25:62 in network mk-ha-423356
	I0419 20:14:51.757590  395150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:14:51.762895  395150 kubeadm.go:877] updating cluster {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:14:51.763090  395150 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:14:51.763156  395150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:14:51.813673  395150 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:14:51.813701  395150 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:14:51.813772  395150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:14:51.851351  395150 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:14:51.851378  395150 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:14:51.851387  395150 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.30.0 crio true true} ...
	I0419 20:14:51.851509  395150 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-423356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:14:51.851576  395150 ssh_runner.go:195] Run: crio config
	I0419 20:14:51.904408  395150 cni.go:84] Creating CNI manager for ""
	I0419 20:14:51.904443  395150 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0419 20:14:51.904464  395150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:14:51.904495  395150 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423356 NodeName:ha-423356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:14:51.904716  395150 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423356"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 20:14:51.904753  395150 kube-vip.go:111] generating kube-vip config ...
	I0419 20:14:51.904815  395150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0419 20:14:51.917550  395150 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0419 20:14:51.917680  395150 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0419 20:14:51.917759  395150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:14:51.928492  395150 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:14:51.928592  395150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0419 20:14:51.939393  395150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0419 20:14:51.957614  395150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:14:51.975134  395150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0419 20:14:52.006858  395150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0419 20:14:52.115125  395150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0419 20:14:52.124578  395150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:14:52.423345  395150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:14:52.469109  395150 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356 for IP: 192.168.39.7
	I0419 20:14:52.469134  395150 certs.go:194] generating shared ca certs ...
	I0419 20:14:52.469150  395150 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.469295  395150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:14:52.469341  395150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:14:52.469351  395150 certs.go:256] generating profile certs ...
	I0419 20:14:52.469417  395150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/client.key
	I0419 20:14:52.469444  395150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a
	I0419 20:14:52.469456  395150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.121 192.168.39.111 192.168.39.254]
	I0419 20:14:52.830008  395150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a ...
	I0419 20:14:52.830049  395150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a: {Name:mk0dc2583e0f7154aa0905cbefab2d5317314ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.830242  395150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a ...
	I0419 20:14:52.830267  395150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a: {Name:mk2defad1ff8d9549d78845d6c6dd19f6514872f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:14:52.830364  395150 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt.48e29c3a -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt
	I0419 20:14:52.830522  395150 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key.48e29c3a -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key
	I0419 20:14:52.830656  395150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key
	I0419 20:14:52.830682  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:14:52.830694  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:14:52.830704  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:14:52.830713  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:14:52.830723  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:14:52.830736  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:14:52.830744  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:14:52.830756  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:14:52.830799  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:14:52.830830  395150 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:14:52.830840  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:14:52.830865  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:14:52.830892  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:14:52.830916  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:14:52.830955  395150 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:14:52.830980  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:14:52.830997  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:52.831017  395150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:14:52.831732  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:14:53.082686  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:14:53.322741  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:14:53.385663  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:14:53.437076  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0419 20:14:53.479819  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 20:14:53.510914  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:14:53.544550  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/ha-423356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:14:53.575010  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:14:53.617993  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:14:53.659021  395150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:14:53.730060  395150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:14:53.789720  395150 ssh_runner.go:195] Run: openssl version
	I0419 20:14:53.797548  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:14:53.813678  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.819424  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.819495  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:14:53.826718  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:14:53.838444  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:14:53.850916  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.863599  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.863670  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:14:53.870587  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:14:53.889682  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:14:53.907227  395150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.915094  395150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.915164  395150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:14:53.928643  395150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:14:53.941191  395150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:14:53.948678  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:14:53.957670  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:14:53.964025  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:14:53.972572  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:14:53.983092  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:14:53.993120  395150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 20:14:54.001177  395150 kubeadm.go:391] StartCluster: {Name:ha-423356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-423356 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:14:54.001375  395150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:14:54.001453  395150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:14:54.078997  395150 cri.go:89] found id: "e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2"
	I0419 20:14:54.079026  395150 cri.go:89] found id: "51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e"
	I0419 20:14:54.079042  395150 cri.go:89] found id: "331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd"
	I0419 20:14:54.079047  395150 cri.go:89] found id: "31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40"
	I0419 20:14:54.079052  395150 cri.go:89] found id: "80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5"
	I0419 20:14:54.079056  395150 cri.go:89] found id: "483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d"
	I0419 20:14:54.079060  395150 cri.go:89] found id: "81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc"
	I0419 20:14:54.079064  395150 cri.go:89] found id: "8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814"
	I0419 20:14:54.079068  395150 cri.go:89] found id: "95f24d776dec7f41671b86692950532aad53a72dc9d0ebde106a468c54958596"
	I0419 20:14:54.079077  395150 cri.go:89] found id: "c7f33bcee24d50606a5525fcf4daca0f2da2fd97c77a364aec2a6d62d257aacd"
	I0419 20:14:54.079081  395150 cri.go:89] found id: "dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5"
	I0419 20:14:54.079086  395150 cri.go:89] found id: "2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24"
	I0419 20:14:54.079090  395150 cri.go:89] found id: "b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573"
	I0419 20:14:54.079094  395150 cri.go:89] found id: "e7d5dc9bb5064e07c0df8f88ad7c51bb225d8e4fa9f091154c5fd0a2e00a0fad"
	I0419 20:14:54.079101  395150 cri.go:89] found id: "7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861"
	I0419 20:14:54.079109  395150 cri.go:89] found id: "1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d"
	I0419 20:14:54.079113  395150 cri.go:89] found id: ""
	I0419 20:14:54.079175  395150 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.074241533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc660929-1554-4e99-991d-113eb46745fb name=/runtime.v1.RuntimeService/Version
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.075717183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f780c40-610f-4f15-bd7d-3efc12a91946 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.076220324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713558011076195873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f780c40-610f-4f15-bd7d-3efc12a91946 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.077110039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4eec64be-7727-4619-95f0-f86d39735247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.077194127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4eec64be-7727-4619-95f0-f86d39735247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.077575140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:938f0ef7a4d374260f7a793c335a16f00f6c936954e9fcfc99ef6af38cc71dee,PodSandboxId:6c5b939255eda83406ce7d311afd4c343e31b460401b7d27cbb6ade76a16df3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557852662006249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3
bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c1
9,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3
bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubern
etes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernet
es.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4eec64be-7727-4619-95f0-f86d39735247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.083700074Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7dc26ad4-0cd4-4515-858a-2cdbed56195e name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.084802801Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wqfc4,Uid:a361495f-5d84-4133-b206-4a42fb8ba66d,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557725811279506,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:06:36.307200609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-423356,Uid:211f18431db98436f7615a374702b84d,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1713557706029233105,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{kubernetes.io/config.hash: 211f18431db98436f7615a374702b84d,kubernetes.io/config.seen: 2024-04-19T20:14:52.087370759Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rr7zk,Uid:7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692220698701,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-19T20:03:59.850748118Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9td9f,Uid:ea98cb5e-6a87-4ed0-8a55-26b77c219151,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692190662193,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:03:59.868755520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-423356,Uid:53792bd67366d335a595bc40683f7ee3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692162431429,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 53792bd67366d335a595bc40683f7ee3,kubernetes.io/config.seen: 2024-04-19T20:03:44.634813504Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&PodSandboxMetadata{Name:kindnet-bqwfr,Uid:1c28a900-318f-4bdc-ba7b-6cf349955c64,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713557692139099711,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kube
rnetes.io/config.seen: 2024-04-19T20:03:57.882370365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&PodSandboxMetadata{Name:kube-proxy-chd2r,Uid:316420ae-b773-4dd6-b49c-d8a9d6d34752,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692130712430,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:03:57.860497017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-423356,Uid:4b05c33463fef489faa8b093150b7c19,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1713557692097471639,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.7:8443,kubernetes.io/config.hash: 4b05c33463fef489faa8b093150b7c19,kubernetes.io/config.seen: 2024-04-19T20:03:44.634811432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&PodSandboxMetadata{Name:etcd-ha-423356,Uid:07583ed3bf1bd2cd9c408c9e17b0e324,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692096680887,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e32
4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.7:2379,kubernetes.io/config.hash: 07583ed3bf1bd2cd9c408c9e17b0e324,kubernetes.io/config.seen: 2024-04-19T20:03:44.634806937Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-423356,Uid:1d8131a46db3a9ad0004e31ea3bff211,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713557692089471607,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d8131a46db3a9ad0004e31ea3bff211,kubernetes.io/config.seen: 2024-04-19T20:03:44.634812572Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:6c5b939255eda83406ce7d311afd4c343e31b460401b7d27cbb6ade76a16df3b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:956e5c6c-de0e-4f78-9151-d456dc732bdd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713557692060341138,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePul
lPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-19T20:03:59.858630453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&PodSandboxMetadata{Name:kindnet-bqwfr,Uid:1c28a900-318f-4bdc-ba7b-6cf349955c64,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1713557690293706171,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T
20:03:57.882370365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:956e5c6c-de0e-4f78-9151-d456dc732bdd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1713557690278564294,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\
"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-19T20:03:59.858630453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wqfc4,Uid:a361495f-5d84-4133-b206-4a42fb8ba66d,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557197221962500,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-0
4-19T20:06:36.307200609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9td9f,Uid:ea98cb5e-6a87-4ed0-8a55-26b77c219151,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557040195972391,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:03:59.868755520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rr7zk,Uid:7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557040159437414,Labels:map[s
tring]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:03:59.850748118Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&PodSandboxMetadata{Name:kube-proxy-chd2r,Uid:316420ae-b773-4dd6-b49c-d8a9d6d34752,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557038466827895,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:03:57.860497017Z,ku
bernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-423356,Uid:53792bd67366d335a595bc40683f7ee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557018274945464,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 53792bd67366d335a595bc40683f7ee3,kubernetes.io/config.seen: 2024-04-19T20:03:37.775496407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&PodSandboxMetadata{Name:etcd-ha-423356,Uid:07583ed3bf1bd2cd9c408c9e17b0e324,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713557018246792249,La
bels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.7:2379,kubernetes.io/config.hash: 07583ed3bf1bd2cd9c408c9e17b0e324,kubernetes.io/config.seen: 2024-04-19T20:03:37.775489543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7dc26ad4-0cd4-4515-858a-2cdbed56195e name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.085851832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a60b0bc-7f2e-4111-a6ed-b8d30b6b12e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.085906744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a60b0bc-7f2e-4111-a6ed-b8d30b6b12e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.086328392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:938f0ef7a4d374260f7a793c335a16f00f6c936954e9fcfc99ef6af38cc71dee,PodSandboxId:6c5b939255eda83406ce7d311afd4c343e31b460401b7d27cbb6ade76a16df3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557852662006249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3
bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c1
9,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3
bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubern
etes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernet
es.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a60b0bc-7f2e-4111-a6ed-b8d30b6b12e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.131159358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3122c133-30ad-44a4-b10e-5346e0cddd05 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.131235872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3122c133-30ad-44a4-b10e-5346e0cddd05 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.132372882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d83197c-cf56-4c0b-a29a-fd0e6f29a175 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.132801334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713558011132777398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d83197c-cf56-4c0b-a29a-fd0e6f29a175 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.133583667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c4e21a9-2775-4e32-bd32-f1fb2d8bbf19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.133676636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c4e21a9-2775-4e32-bd32-f1fb2d8bbf19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.134855991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:938f0ef7a4d374260f7a793c335a16f00f6c936954e9fcfc99ef6af38cc71dee,PodSandboxId:6c5b939255eda83406ce7d311afd4c343e31b460401b7d27cbb6ade76a16df3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557852662006249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3
bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c1
9,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3
bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubern
etes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernet
es.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c4e21a9-2775-4e32-bd32-f1fb2d8bbf19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.186764129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d83b5de8-e836-4249-9a42-8a267f4747d0 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.186840699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d83b5de8-e836-4249-9a42-8a267f4747d0 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.188167393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe0ca5bf-0e13-4968-912a-3c4db5bda439 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.188628382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713558011188602729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe0ca5bf-0e13-4968-912a-3c4db5bda439 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.189340571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46c2f826-b450-4467-a345-9023afa227e0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.189396431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46c2f826-b450-4467-a345-9023afa227e0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:20:11 ha-423356 crio[4115]: time="2024-04-19 20:20:11.189862010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:938f0ef7a4d374260f7a793c335a16f00f6c936954e9fcfc99ef6af38cc71dee,PodSandboxId:6c5b939255eda83406ce7d311afd4c343e31b460401b7d27cbb6ade76a16df3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713557852662006249,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713557735661460294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd,PodSandboxId:93cf72df1c144c133bc397bc684b6881c490399c2a2b0c8e926929969f29e40b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713557734671787392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]string{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713557732667407776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c19,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f45c0debfb51d5c97169e05befa988ed00defac101f8f1d4157986f600ed7f8,PodSandboxId:4ce098ab55cd36da156a2b146fdea72d4e5c7803e7679ecf6db35ca78dcb7a8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713557725972363984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernetes.container.hash: efffbf9c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7324bf6c7d4d254f0379c935cc943cb08fc1bc55a04182460699319f1c3ac018,PodSandboxId:895e16416b862e8428d71b05e45eb0f27bec5cc40c40bf2c3dab5dad2490645c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713557706132888734,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211f18431db98436f7615a374702b84d,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad,PodSandboxId:b9bd34a0c38f4184c63b0c38dd0bc8b4a3cf3091595d9f6807ce681b41167765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713557693133204932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2,PodSandboxId:0dd6b9b5f2ae850a15a759fa0a13554768b510b33c288d1cd565428beef625ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692975607146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e,PodSandboxId:a4b7516b1af9ccc395856b43ffdbf8306435e8de3f2a20bdfaf91ffeb3aac650,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713557692947811628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd,PodSandboxId:6d9c82ce1c2c0d01cb16043835335faeabae8a76c2cdc37c5cbb9ab816bf2133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713557692883283319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5,PodSandboxId:ae6f7aaaed4e6d8a7fb811b4409883653c11ff45fcbe586efc73f792b62dbf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713557692820459828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3
bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40,PodSandboxId:1412ac6cbba35259de12bd1ec6aab3b43139b19f04dfddddd3ae99514d4144cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713557692824605322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b05c33463fef489faa8b093150b7c1
9,},Annotations:map[string]string{io.kubernetes.container.hash: 56995bd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d,PodSandboxId:8feea75542997f71ed463b97be64b9c08f50368dafccdf79c50282841c8e2fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713557692686679878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8131a46db3a9ad0004e31ea3
bff211,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc,PodSandboxId:bc5be90cc38ed9f4f56c3844824328ad8c2667441ef1e1540fdd3b13831c00a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713557690675667860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqwfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c28a900-318f-4bdc-ba7b-6cf349955c64,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 3efe16b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8933eb68a303daa804d4afa3485fac5fab09c590134ba293b3cdb33d1deed814,PodSandboxId:3b67c31972de69bd2aa05fe21d32836904aec22866b914780be0da0fe70d355b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713557690434128774,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956e5c6c-de0e-4f78-9151-d456dc732bdd,},Annotations:map[string]string{io.kubern
etes.container.hash: 3cd55650,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b80b69bd108f9da108a6c8bded65888776858241a9f3470588ba0e862718538,PodSandboxId:027b57294cfbd3e540f3d583cf77471a2f2ac84f0befbe0c3f4b7d3970cedb3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713557199513682926,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wqfc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a361495f-5d84-4133-b206-4a42fb8ba66d,},Annotations:map[string]string{io.kubernet
es.container.hash: efffbf9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5,PodSandboxId:14c798e2b76b036b220b163ed32c5f9adf08042db8765b5b3c460e7cbd00dea5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040508781871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9td9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea98cb5e-6a87-4ed0-8a55-26b77c219151,},Annotations:map[string]string{io.kubernetes.container.hash: 6773107c,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24,PodSandboxId:8a34b24c4a7dd2398438b7f2864b5b708e07c0f8d63395de9a59f97058a5469c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713557040394795877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-rr7zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa96cbe-afd8-4b8f-bc62-ab6b38811bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 83643fdf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573,PodSandboxId:a9af78af7cd87f3d1a5966421c02d7d31c5cc79312c72f13212a02b83ae24048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713557038567321408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316420ae-b773-4dd6-b49c-d8a9d6d34752,},Annotations:map[string]string{io.kubernetes.container.hash: 94da94bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861,PodSandboxId:68e93a81da913971ff7ee3e2dd6bdeaa986564dd3e56073e6529e1310f2f1e2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713557018532708264,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53792bd67366d335a595bc40683f7ee3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d,PodSandboxId:9ba5078b4acef509746a556758e3918118f6f67aa3d8d06cf34d8432f3180f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_EXITED,CreatedAt:1713557018403502739,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-423356,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07583ed3bf1bd2cd9c408c9e17b0e324,},Annotations:map[string]string{io.kubernetes.container.hash: e92c52d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46c2f826-b450-4467-a345-9023afa227e0 name=/runtime.v1.RuntimeService/ListContainers
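The interleaved Request/Response entries above are CRI-O's debug-level gRPC interceptor logging (note the `otel-collector/interceptors.go` file tags). A minimal sketch for watching the same stream live on the node, assuming the ha-423356 profile is still up and that crio runs as a systemd unit inside the minikube VM (which is the default for this driver):

    $ minikube ssh -p ha-423356          # open a shell on the control-plane VM
    $ sudo journalctl -u crio -f         # follow CRI-O's log, including the gRPC Request/Response entries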
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	938f0ef7a4d37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   6c5b939255eda       storage-provisioner
	0536a8eca2340       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   8feea75542997       kube-controller-manager-ha-423356
	91b7d5d464a5c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   93cf72df1c144       kindnet-bqwfr
	3f764732cb42d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   1412ac6cbba35       kube-apiserver-ha-423356
	3f45c0debfb51       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   4ce098ab55cd3       busybox-fc5497c4f-wqfc4
	7324bf6c7d4d2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   895e16416b862       kube-vip-ha-423356
	81c24d896b86f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   b9bd34a0c38f4       kube-proxy-chd2r
	e67b63d64b788       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0dd6b9b5f2ae8       coredns-7db6d8ff4d-rr7zk
	51ec7d0458ebe       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   a4b7516b1af9c       kube-scheduler-ha-423356
	331f89f692a2d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6d9c82ce1c2c0       coredns-7db6d8ff4d-9td9f
	31e5d247baaae       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   1412ac6cbba35       kube-apiserver-ha-423356
	80df63a7dd481       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   ae6f7aaaed4e6       etcd-ha-423356
	483cbd68c3bcc       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   8feea75542997       kube-controller-manager-ha-423356
	81b2b256c447c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   bc5be90cc38ed       kindnet-bqwfr
	8933eb68a303d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   3b67c31972de6       storage-provisioner
	3b80b69bd108f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   027b57294cfbd       busybox-fc5497c4f-wqfc4
	dcfa7c435542c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   14c798e2b76b0       coredns-7db6d8ff4d-9td9f
	2382f52abc364       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   8a34b24c4a7dd       coredns-7db6d8ff4d-rr7zk
	b5377046480e9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago      Exited              kube-proxy                0                   a9af78af7cd87       kube-proxy-chd2r
	7f1baf88d5884       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   68e93a81da913       kube-scheduler-ha-423356
	1572778d3f528       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   9ba5078b4acef       etcd-ha-423356
	
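For local reproduction, the container status table above corresponds to CRI-O's own container inventory on the node. A minimal sketch for re-querying it directly, assuming the profile is still running and crictl is available in the VM (both are standard for this test environment); the container ID shown is just the storage-provisioner entry from the table, used as an example:

    $ minikube ssh -p ha-423356            # open a shell on the control-plane VM
    $ sudo crictl ps -a                    # list all CRI-O containers, running and exited
    $ sudo crictl inspect 938f0ef7a4d37    # full JSON for one container, addressed by ID prefix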
	
	==> coredns [2382f52abc3649860a6b41fba456db8c82c266877180779107eb52b907bb7d24] <==
	[INFO] 10.244.1.2:34902 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201571s
	[INFO] 10.244.1.2:53225 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001465991s
	[INFO] 10.244.1.2:59754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258304s
	[INFO] 10.244.1.2:59316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128123s
	[INFO] 10.244.1.2:48977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110722s
	[INFO] 10.244.0.4:40375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001793494s
	[INFO] 10.244.0.4:60622 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049591s
	[INFO] 10.244.0.4:34038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003778s
	[INFO] 10.244.0.4:51412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043214s
	[INFO] 10.244.0.4:56955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042946s
	[INFO] 10.244.2.2:46864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134976s
	[INFO] 10.244.2.2:34230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011483s
	[INFO] 10.244.1.2:38189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097409s
	[INFO] 10.244.1.2:33041 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080538s
	[INFO] 10.244.0.4:37791 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018566s
	[INFO] 10.244.0.4:46485 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061131s
	[INFO] 10.244.0.4:50872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086293s
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142168s
	[INFO] 10.244.1.2:55061 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177752s
	[INFO] 10.244.0.4:44369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008812s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1890&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1878&timeout=9m34s&timeoutSeconds=574&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [331f89f692a2de834ea63f48181cde340dba89d8ea05a8b03caedc368449cdfd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46366->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46366->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46344->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:46344->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1366665584]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Apr-2024 20:15:08.103) (total time: 10199ms):
	Trace[1366665584]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer 10199ms (20:15:18.303)
	Trace[1366665584]: [10.199297468s] [10.199297468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:46350->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [dcfa7c435542c1b26e6deb1d242775ddf59f385fe77795966a81db8de0a726f5] <==
	[INFO] 10.244.2.2:49259 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138019s
	[INFO] 10.244.1.2:50375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000377277s
	[INFO] 10.244.1.2:43502 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001916758s
	[INFO] 10.244.0.4:50440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109012s
	[INFO] 10.244.0.4:50457 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001351323s
	[INFO] 10.244.0.4:57273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119319s
	[INFO] 10.244.2.2:49275 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210181s
	[INFO] 10.244.2.2:41514 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192084s
	[INFO] 10.244.1.2:56219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000465859s
	[INFO] 10.244.1.2:60572 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114905s
	[INFO] 10.244.0.4:52874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098566s
	[INFO] 10.244.2.2:47734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249839s
	[INFO] 10.244.2.2:50981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179648s
	[INFO] 10.244.2.2:34738 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109005s
	[INFO] 10.244.1.2:37966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181053s
	[INFO] 10.244.1.2:48636 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116821s
	[INFO] 10.244.1.2:52580 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000260337s
	[INFO] 10.244.0.4:43327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088111s
	[INFO] 10.244.0.4:47823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105899s
	[INFO] 10.244.0.4:41223 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050192s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [e67b63d64b788b3a1e1ddf751d845bc5e09e599c36e4427bcd02e5c41b9f65d2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1183407982]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Apr-2024 20:15:05.264) (total time: 13039ms):
	Trace[1183407982]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer 13039ms (20:15:18.304)
	Trace[1183407982]: [13.039743899s] [13.039743899s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58100->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58100->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-423356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_03_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:20:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:18:43 +0000   Fri, 19 Apr 2024 20:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:18:43 +0000   Fri, 19 Apr 2024 20:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:18:43 +0000   Fri, 19 Apr 2024 20:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:18:43 +0000   Fri, 19 Apr 2024 20:18:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-423356
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133e52820e114c7aa16933b82eb1ac6a
	  System UUID:                133e5282-0e11-4c7a-a169-33b82eb1ac6a
	  Boot ID:                    752cc004-2412-44ee-9782-2d20c1c3993d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqfc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-9td9f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-rr7zk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-423356                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-bqwfr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-423356             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-423356    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-chd2r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-423356             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-423356                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Warning  ContainerGCFailed        5m27s (x2 over 6m27s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-423356 event: Registered Node ha-423356 in Controller
	  Normal   NodeNotReady             111s                   node-controller  Node ha-423356 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     88s (x2 over 16m)      kubelet          Node ha-423356 status is now: NodeHasSufficientPID
	  Normal   NodeReady                88s (x2 over 16m)      kubelet          Node ha-423356 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    88s (x2 over 16m)      kubelet          Node ha-423356 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  88s (x2 over 16m)      kubelet          Node ha-423356 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-423356-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_04_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:20:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:16:25 +0000   Fri, 19 Apr 2024 20:15:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-423356-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 346b871eba5f43789a16ce3dbbb4ec2c
	  System UUID:                346b871e-ba5f-4378-9a16-ce3dbbb4ec2c
	  Boot ID:                    7489ab85-d407-430f-8104-10a2700c6b0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fq5c2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-423356-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7ktc2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-423356-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-423356-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d56ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-423356-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-423356-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-423356-m02 status is now: NodeNotReady
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m55s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m55s)  kubelet          Node ha-423356-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m55s)  kubelet          Node ha-423356-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	  Normal  RegisteredNode           3m5s                   node-controller  Node ha-423356-m02 event: Registered Node ha-423356-m02 in Controller
	
	
	Name:               ha-423356-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-423356-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=ha-423356
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:07:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423356-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:17:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:18:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:18:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:18:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Apr 2024 20:17:22 +0000   Fri, 19 Apr 2024 20:18:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-423356-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f1d7a6307945baa5aa5c71ec020b88
	  System UUID:                22f1d7a6-3079-45ba-a5aa-5c71ec020b88
	  Boot ID:                    99ccda9d-ec57-499a-a554-7417a225d5a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2jgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-wj85m              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-7x69m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-423356-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-423356-m04 event: Registered Node ha-423356-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-423356-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-423356-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-423356-m04 has been rebooted, boot id: 99ccda9d-ec57-499a-a554-7417a225d5a2
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-423356-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m41s)   node-controller  Node ha-423356-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.106083] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.064815] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057768] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.176439] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.158804] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285884] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.425458] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.068806] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.328517] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.914724] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.592182] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.083040] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.869346] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:04] kauditd_printk_skb: 76 callbacks suppressed
	[Apr19 20:11] kauditd_printk_skb: 1 callbacks suppressed
	[Apr19 20:14] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[  +0.188401] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[  +0.278508] systemd-fstab-generator[3988]: Ignoring "noauto" option for root device
	[  +0.189057] systemd-fstab-generator[4035]: Ignoring "noauto" option for root device
	[  +0.334823] systemd-fstab-generator[4085]: Ignoring "noauto" option for root device
	[  +1.129238] systemd-fstab-generator[4341]: Ignoring "noauto" option for root device
	[  +3.559773] kauditd_printk_skb: 236 callbacks suppressed
	[Apr19 20:15] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [1572778d3f528d9a66ae0ff8206998407060fe2f6502f984fdb12f79f08ad45d] <==
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:13:18.620554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:13:12.228251Z","time spent":"6.392299825s","remote":"127.0.0.1:54206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true "}
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-19T20:13:18.615971Z","caller":"traceutil/trace.go:171","msg":"trace[104768494] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"6.744332203s","start":"2024-04-19T20:13:11.871635Z","end":"2024-04-19T20:13:18.615967Z","steps":["trace[104768494] 'agreement among raft nodes before linearized reading'  (duration: 6.728393718s)"],"step_count":1}
	2024/04/19 20:13:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:13:18.673635Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:13:18.673697Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-19T20:13:18.675273Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bb39151d8411994b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-19T20:13:18.675503Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675585Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675657Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675745Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675901Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.675976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.676038Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"44173b90a3cc0cfa"}
	{"level":"info","ts":"2024-04-19T20:13:18.676154Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676207Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676254Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676368Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676433Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676586Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.676663Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:13:18.679746Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-04-19T20:13:18.679937Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-04-19T20:13:18.679974Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-423356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	
	
	==> etcd [80df63a7dd481fbe2445d34585b14e64d4ef4aaa65d859cb31f24ea66aa45fd5] <==
	{"level":"info","ts":"2024-04-19T20:16:47.688003Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:16:47.689462Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:16:47.706845Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.111:45428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-19T20:16:49.261609Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-19T20:16:49.26173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e763362e070ef6ce","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-19T20:16:58.539492Z","caller":"traceutil/trace.go:171","msg":"trace[1123723025] transaction","detail":"{read_only:false; response_revision:2453; number_of_response:1; }","duration":"100.117958ms","start":"2024-04-19T20:16:58.439348Z","end":"2024-04-19T20:16:58.539466Z","steps":["trace[1123723025] 'process raft request'  (duration: 100.005726ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:17:26.171967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.721343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-7x69m\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2024-04-19T20:17:26.172331Z","caller":"traceutil/trace.go:171","msg":"trace[1352967015] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-7x69m; range_end:; response_count:1; response_revision:2542; }","duration":"166.260276ms","start":"2024-04-19T20:17:26.00603Z","end":"2024-04-19T20:17:26.172291Z","steps":["trace[1352967015] 'range keys from in-memory index tree'  (duration: 164.615072ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:17:36.990716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b switched to configuration voters=(4906455811452833018 13490837375279012171)"}
	{"level":"info","ts":"2024-04-19T20:17:36.994634Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","removed-remote-peer-id":"e763362e070ef6ce","removed-remote-peer-urls":["https://192.168.39.111:2380"]}
	{"level":"info","ts":"2024-04-19T20:17:36.994792Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:36.996265Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:17:36.996971Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:36.997576Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:17:36.997638Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:17:36.997826Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:36.998208Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce","error":"context canceled"}
	{"level":"warn","ts":"2024-04-19T20:17:36.998297Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e763362e070ef6ce","error":"failed to read e763362e070ef6ce on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-19T20:17:36.998369Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:36.998589Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce","error":"context canceled"}
	{"level":"info","ts":"2024-04-19T20:17:36.998647Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:17:36.998696Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e763362e070ef6ce"}
	{"level":"info","ts":"2024-04-19T20:17:36.998735Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"bb39151d8411994b","removed-remote-peer-id":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:37.017683Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bb39151d8411994b","remote-peer-id-stream-handler":"bb39151d8411994b","remote-peer-id-from":"e763362e070ef6ce"}
	{"level":"warn","ts":"2024-04-19T20:17:37.018238Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.111:52178","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:20:11 up 17 min,  0 users,  load average: 0.11, 0.35, 0.26
	Linux ha-423356 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [81b2b256c447c3400a637b9536ca23e1a9b09ec09ac0b8afaae55d0d36e612fc] <==
	I0419 20:14:51.041032       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0419 20:14:51.041186       1 main.go:107] hostIP = 192.168.39.7
	podIP = 192.168.39.7
	I0419 20:14:51.041415       1 main.go:116] setting mtu 1500 for CNI 
	I0419 20:14:51.041460       1 main.go:146] kindnetd IP family: "ipv4"
	I0419 20:14:51.041484       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	
	==> kindnet [91b7d5d464a5cc0e8126c2169230befae125061cb4ac7fd66c1c8d4d385d34dd] <==
	I0419 20:19:28.241213       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:19:38.249725       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:19:38.250005       1 main.go:227] handling current node
	I0419 20:19:38.250135       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:19:38.250189       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:19:38.250408       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:19:38.250455       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:19:48.259398       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:19:48.259821       1 main.go:227] handling current node
	I0419 20:19:48.259931       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:19:48.259958       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:19:48.260169       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:19:48.260202       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:19:58.267782       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:19:58.267923       1 main.go:227] handling current node
	I0419 20:19:58.267957       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:19:58.267975       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:19:58.268169       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:19:58.268211       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	I0419 20:20:08.282603       1 main.go:223] Handling node with IPs: map[192.168.39.7:{}]
	I0419 20:20:08.282712       1 main.go:227] handling current node
	I0419 20:20:08.282737       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0419 20:20:08.282755       1 main.go:250] Node ha-423356-m02 has CIDR [10.244.1.0/24] 
	I0419 20:20:08.282940       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0419 20:20:08.283227       1 main.go:250] Node ha-423356-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [31e5d247baaae659d4d4269f887e628df0ad909c2d191a664f7461a387905f40] <==
	I0419 20:14:53.587770       1 options.go:221] external host was not specified, using 192.168.39.7
	I0419 20:14:53.592718       1 server.go:148] Version: v1.30.0
	I0419 20:14:53.592782       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:14:54.519103       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0419 20:14:54.522042       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 20:14:54.522126       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 20:14:54.522205       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:14:54.522275       1 instance.go:299] Using reconciler: lease
	W0419 20:15:14.515689       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0419 20:15:14.515900       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0419 20:15:14.523309       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [3f764732cb42d1f170d90225033030961351617ab86c60f9cac878bfd0f3bdd7] <==
	I0419 20:15:37.832800       1 aggregator.go:163] waiting for initial CRD sync...
	I0419 20:15:37.889462       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0419 20:15:37.889504       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 20:15:37.941184       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:15:37.941228       1 policy_source.go:224] refreshing policies
	I0419 20:15:37.962482       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:15:37.989951       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 20:15:37.990135       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:15:37.990152       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:15:37.990159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:15:37.990247       1 cache.go:39] Caches are synced for autoregister controller
	I0419 20:15:38.031429       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 20:15:38.031476       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 20:15:38.031518       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:15:38.031618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 20:15:38.032028       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:15:38.032493       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 20:15:38.038014       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 20:15:38.038390       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0419 20:15:38.047502       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0419 20:15:38.837542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0419 20:15:39.272885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0419 20:15:39.274619       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 20:15:39.282621       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0419 20:17:59.277246       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.7]
	
	
	==> kube-controller-manager [0536a8eca23403d3ad8ecfa3c6d3489e121bf4ec88c179b5699544ab07fd547d] <==
	E0419 20:18:30.484592       1 gc_controller.go:153] "Failed to get node" err="node \"ha-423356-m03\" not found" logger="pod-garbage-collector-controller" node="ha-423356-m03"
	E0419 20:18:30.484596       1 gc_controller.go:153] "Failed to get node" err="node \"ha-423356-m03\" not found" logger="pod-garbage-collector-controller" node="ha-423356-m03"
	I0419 20:18:30.496852       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-423356-m03"
	I0419 20:18:30.529285       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-423356-m03"
	I0419 20:18:30.529339       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-423356-m03"
	I0419 20:18:30.577365       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-423356-m03"
	I0419 20:18:30.577450       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sr4gd"
	I0419 20:18:30.607420       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sr4gd"
	I0419 20:18:30.607530       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-423356-m03"
	I0419 20:18:30.636283       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-423356-m03"
	I0419 20:18:30.636332       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fkd5h"
	I0419 20:18:30.683659       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fkd5h"
	I0419 20:18:30.683706       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-423356-m03"
	I0419 20:18:30.716342       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-423356-m03"
	I0419 20:18:30.716438       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-423356-m03"
	I0419 20:18:30.748930       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-423356-m03"
	I0419 20:18:44.762916       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-5drsc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-5drsc\": the object has been modified; please apply your changes to the latest version and try again"
	I0419 20:18:44.763422       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3bdd6b35-253f-454a-81b2-7329ecd9610d", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-5drsc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-5drsc": the object has been modified; please apply your changes to the latest version and try again
	I0419 20:18:44.805744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.613005ms"
	I0419 20:18:44.837272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.378376ms"
	I0419 20:18:44.837625       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-5drsc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-5drsc\": the object has been modified; please apply your changes to the latest version and try again"
	I0419 20:18:44.839002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.433µs"
	I0419 20:18:44.840137       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"3bdd6b35-253f-454a-81b2-7329ecd9610d", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-5drsc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-5drsc": the object has been modified; please apply your changes to the latest version and try again
	I0419 20:18:45.021237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.10147ms"
	I0419 20:18:45.021359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.342µs"
	
	
	==> kube-controller-manager [483cbd68c3bcc8236b0f9be658b47b1b279f954d78487e2314352b0d07f3de5d] <==
	I0419 20:14:54.659457       1 serving.go:380] Generated self-signed cert in-memory
	I0419 20:14:55.115274       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0419 20:14:55.115316       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:14:55.116893       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0419 20:14:55.117130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:14:55.117235       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0419 20:14:55.117441       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0419 20:15:15.529626       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.7:8443/healthz\": dial tcp 192.168.39.7:8443: connect: connection refused"
	
	
	==> kube-proxy [81c24d896b86fdac002733a7f2512c23e9050a4f8aa29c0c0c763aab1c6b35ad] <==
	I0419 20:14:55.154589       1 server_linux.go:69] "Using iptables proxy"
	E0419 20:14:56.798758       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:14:59.872420       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:02.943728       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:09.086602       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0419 20:15:21.377509       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0419 20:15:40.251977       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0419 20:15:40.297024       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:15:40.297150       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:15:40.297169       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:15:40.299910       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:15:40.300329       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:15:40.300384       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:15:40.301849       1 config.go:192] "Starting service config controller"
	I0419 20:15:40.301931       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:15:40.301981       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:15:40.301999       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:15:40.302727       1 config.go:319] "Starting node config controller"
	I0419 20:15:40.309815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:15:40.309865       1 shared_informer.go:320] Caches are synced for node config
	I0419 20:15:40.403013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:15:40.403190       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [b5377046480e9dac33b78a5042e64677670b3dd567b57fcdceb16651ada44573] <==
	E0419 20:12:04.767495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:07.838690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:07.838712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.983879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.983945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.984012       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.984130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:13.984843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:13.984966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:23.199615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:23.200013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:26.271600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:26.271945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:26.272028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:26.272152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:44.703558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:44.703623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-423356&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:47.775703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:47.775802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0419 20:12:50.847775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0419 20:12:50.847853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [51ec7d0458ebe810b63b1120ed6afd344fcaaeccf1c63eac068cb11a26fc617e] <==
	W0419 20:15:31.723340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0419 20:15:31.723486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	W0419 20:15:37.896825       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:15:37.897556       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:15:37.937544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:15:37.937647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:15:37.937819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 20:15:37.937864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 20:15:37.938041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0419 20:15:37.938155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0419 20:15:37.938255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:15:37.938288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:15:37.938370       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 20:15:37.940137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 20:15:37.940281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 20:15:37.940335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 20:15:37.940415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:15:37.940448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:15:37.940516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:15:37.940602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:15:37.940728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:15:37.940790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 20:15:37.940924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0419 20:15:37.940986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0419 20:16:03.148334       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7f1baf88d588480bff0c1adef0cf006f60a7af0e1cc90a6c2f9ec6a83b9a3861] <==
	W0419 20:13:16.545364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:13:16.545441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:13:16.565907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 20:13:16.565972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0419 20:13:16.575881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 20:13:16.576149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 20:13:16.588440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:13:16.588573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:13:16.615131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:13:16.616748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:13:16.616629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 20:13:16.617349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0419 20:13:16.963270       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:13:16.963394       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:13:17.078162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.078335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0419 20:13:17.210790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:13:17.210869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:13:17.334291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0419 20:13:17.334412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0419 20:13:17.360234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.360327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0419 20:13:17.443256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:17.443510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:13:18.559682       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 19 20:18:29 ha-423356 kubelet[1380]: E0419 20:18:29.514167    1380 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-423356?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 19 20:18:32 ha-423356 kubelet[1380]: E0419 20:18:32.130914    1380 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-423356\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844326    1380 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844419    1380 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844456    1380 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844477    1380 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844496    1380 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844514    1380 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.844537    1380 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: E0419 20:18:33.844611    1380 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-423356\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-423356?timeout=10s\": http2: client connection lost"
	Apr 19 20:18:33 ha-423356 kubelet[1380]: E0419 20:18:33.844622    1380 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 19 20:18:33 ha-423356 kubelet[1380]: E0419 20:18:33.844667    1380 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-423356?timeout=10s\": http2: client connection lost"
	Apr 19 20:18:33 ha-423356 kubelet[1380]: I0419 20:18:33.844691    1380 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.845218    1380 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:33 ha-423356 kubelet[1380]: W0419 20:18:33.845285    1380 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 19 20:18:44 ha-423356 kubelet[1380]: E0419 20:18:44.680881    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:18:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:18:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:18:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:18:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:19:44 ha-423356 kubelet[1380]: E0419 20:19:44.669745    1380 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:19:44 ha-423356 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:19:44 ha-423356 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:19:44 ha-423356 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:19:44 ha-423356 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:20:10.708953  397961 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
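The `bufio.Scanner: token too long` error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while re-reading lastStart.txt, which contains single log lines far longer than that (for example the cluster-config dumps further down in these logs). Below is a minimal Go sketch of reading such a file with an enlarged scanner buffer; it is not minikube's actual code, and the shortened file path and the 10 MiB cap are illustrative assumptions.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; the report reads .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit above bufio.MaxScanTokenSize (64 KiB) so that
	// very long log lines do not abort the scan with "token too long".
	// The 10 MiB maximum here is an assumed value, not minikube's setting.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, this is where bufio.ErrTooLong surfaces.
		log.Fatal(err)
	}
}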
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-423356 -n ha-423356
helpers_test.go:261: (dbg) Run:  kubectl --context ha-423356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.04s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (302.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151935
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-151935
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-151935: exit status 82 (2m2.049065205s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-151935-m03"  ...
	* Stopping node "multinode-151935-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-151935" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151935 --wait=true -v=8 --alsologtostderr
E0419 20:37:10.228193  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151935 --wait=true -v=8 --alsologtostderr: (2m57.792108618s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151935
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-151935 -n multinode-151935
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-151935 logs -n 25: (1.690392117s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935:/home/docker/cp-test_multinode-151935-m02_multinode-151935.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935 sudo cat                                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m02_multinode-151935.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03:/home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935-m03 sudo cat                                   | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp testdata/cp-test.txt                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935:/home/docker/cp-test_multinode-151935-m03_multinode-151935.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935 sudo cat                                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m03_multinode-151935.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02:/home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935-m02 sudo cat                                   | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-151935 node stop m03                                                          | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	| node    | multinode-151935 node start                                                             | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-151935                                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:34 UTC |                     |
	| stop    | -p multinode-151935                                                                     | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:34 UTC |                     |
	| start   | -p multinode-151935                                                                     | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:36 UTC | 19 Apr 24 20:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-151935                                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:36:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:36:25.803129  407144 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:36:25.803306  407144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:36:25.803315  407144 out.go:304] Setting ErrFile to fd 2...
	I0419 20:36:25.803319  407144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:36:25.803528  407144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:36:25.804417  407144 out.go:298] Setting JSON to false
	I0419 20:36:25.805481  407144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8332,"bootTime":1713550654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:36:25.805566  407144 start.go:139] virtualization: kvm guest
	I0419 20:36:25.808049  407144 out.go:177] * [multinode-151935] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:36:25.809997  407144 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:36:25.809966  407144 notify.go:220] Checking for updates...
	I0419 20:36:25.811273  407144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:36:25.812648  407144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:36:25.814374  407144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:36:25.815864  407144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:36:25.817234  407144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:36:25.818989  407144 config.go:182] Loaded profile config "multinode-151935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:36:25.819109  407144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:36:25.819753  407144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:36:25.819805  407144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:36:25.836258  407144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0419 20:36:25.836702  407144 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:36:25.837313  407144 main.go:141] libmachine: Using API Version  1
	I0419 20:36:25.837351  407144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:36:25.837724  407144 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:36:25.837911  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.874451  407144 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:36:25.875983  407144 start.go:297] selected driver: kvm2
	I0419 20:36:25.876001  407144 start.go:901] validating driver "kvm2" against &{Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:36:25.876223  407144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:36:25.876594  407144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:36:25.876694  407144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:36:25.891533  407144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:36:25.892241  407144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:36:25.892310  407144 cni.go:84] Creating CNI manager for ""
	I0419 20:36:25.892322  407144 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 20:36:25.892387  407144 start.go:340] cluster config:
	{Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:36:25.892544  407144 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:36:25.895070  407144 out.go:177] * Starting "multinode-151935" primary control-plane node in "multinode-151935" cluster
	I0419 20:36:25.896295  407144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:36:25.896340  407144 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:36:25.896355  407144 cache.go:56] Caching tarball of preloaded images
	I0419 20:36:25.896446  407144 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:36:25.896458  407144 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:36:25.896600  407144 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/config.json ...
	I0419 20:36:25.896836  407144 start.go:360] acquireMachinesLock for multinode-151935: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:36:25.896883  407144 start.go:364] duration metric: took 26.213µs to acquireMachinesLock for "multinode-151935"
	I0419 20:36:25.896903  407144 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:36:25.896914  407144 fix.go:54] fixHost starting: 
	I0419 20:36:25.897209  407144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:36:25.897245  407144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:36:25.912127  407144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0419 20:36:25.912580  407144 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:36:25.913118  407144 main.go:141] libmachine: Using API Version  1
	I0419 20:36:25.913141  407144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:36:25.913515  407144 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:36:25.913707  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.913877  407144 main.go:141] libmachine: (multinode-151935) Calling .GetState
	I0419 20:36:25.915479  407144 fix.go:112] recreateIfNeeded on multinode-151935: state=Running err=<nil>
	W0419 20:36:25.915501  407144 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:36:25.917311  407144 out.go:177] * Updating the running kvm2 "multinode-151935" VM ...
	I0419 20:36:25.918464  407144 machine.go:94] provisionDockerMachine start ...
	I0419 20:36:25.918481  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.918679  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:25.921051  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:25.921481  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:25.921506  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:25.921634  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:25.921791  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:25.921946  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:25.922106  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:25.922240  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:25.922472  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:25.922484  407144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:36:26.038786  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151935
	
	I0419 20:36:26.038817  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.039120  407144 buildroot.go:166] provisioning hostname "multinode-151935"
	I0419 20:36:26.039149  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.039327  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.041843  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.042250  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.042285  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.042388  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.042536  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.042715  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.042833  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.042993  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.043244  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.043261  407144 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-151935 && echo "multinode-151935" | sudo tee /etc/hostname
	I0419 20:36:26.170014  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151935
	
	I0419 20:36:26.170051  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.172804  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.173123  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.173158  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.173308  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.173531  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.173691  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.173824  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.173938  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.174137  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.174152  407144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-151935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-151935/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-151935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:36:26.294082  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
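	The shell fragment above keeps the /etc/hosts update idempotent: it only touches the file when no line already names the host, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. The same decision logic as a self-contained Go helper, for illustration only (the function name and string-based approach are mine, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the logged shell: if no line already maps the
	// hostname, rewrite an existing "127.0.1.1 ..." line, otherwise append one.
	func ensureHostsEntry(hosts, hostname string) string {
		hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
		if hasName.MatchString(hosts) {
			return hosts // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + hostname + "\n"
	}

	func main() {
		before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureHostsEntry(before, "multinode-151935"))
	}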
	I0419 20:36:26.294118  407144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:36:26.294141  407144 buildroot.go:174] setting up certificates
	I0419 20:36:26.294152  407144 provision.go:84] configureAuth start
	I0419 20:36:26.294161  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.294517  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:36:26.297591  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.297998  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.298026  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.298206  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.300717  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.301109  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.301142  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.301258  407144 provision.go:143] copyHostCerts
	I0419 20:36:26.301295  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:36:26.301348  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:36:26.301361  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:36:26.301430  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:36:26.301543  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:36:26.301562  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:36:26.301569  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:36:26.301594  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:36:26.301647  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:36:26.301663  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:36:26.301676  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:36:26.301698  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:36:26.301751  407144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.multinode-151935 san=[127.0.0.1 192.168.39.193 localhost minikube multinode-151935]
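	The provision.go:117 step above issues a server certificate signed by the profile's CA, with the node IP, loopback address, and host names from the san=[...] list embedded as SANs; the next step copies the resulting server.pem and server-key.pem to /etc/docker on the guest. A minimal sketch of that kind of SAN-bearing issuance with Go's crypto/x509 (the self-generated CA, key size, and validity are illustrative stand-ins, not minikube's actual implementation; error handling is elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair; in minikube this would be loaded from certs/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server key and certificate carrying the SANs seen in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-151935"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.193")},
			DNSNames:     []string{"localhost", "minikube", "multinode-151935"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// Emit server.pem; server-key.pem would be written the same way.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}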
	I0419 20:36:26.442167  407144 provision.go:177] copyRemoteCerts
	I0419 20:36:26.442230  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:36:26.442258  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.445479  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.445805  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.445835  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.446085  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.446269  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.446461  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.446647  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:36:26.531216  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:36:26.531300  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:36:26.560108  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:36:26.560185  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:36:26.587458  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:36:26.587537  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0419 20:36:26.614424  407144 provision.go:87] duration metric: took 320.255427ms to configureAuth
	I0419 20:36:26.614462  407144 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:36:26.614769  407144 config.go:182] Loaded profile config "multinode-151935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:36:26.614853  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.617435  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.617810  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.617839  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.618032  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.618252  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.618405  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.618546  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.618724  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.618895  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.618911  407144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:37:57.447685  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:37:57.447759  407144 machine.go:97] duration metric: took 1m31.529281382s to provisionDockerMachine
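	The provisioning command logged at 20:36:26.618 writes a CRI-O sysconfig drop-in and restarts the service. The %!s(MISSING) token is Go fmt's marker for a verb left without an operand when the command was echoed into the log, so the command sent over SSH is, in effect, a plain printf %s "..." pipeline; per the timestamps, the trailing systemctl restart crio accounts for most of the 1m31.5s provisionDockerMachine duration reported just above. A small sketch of rendering that command string (the helper name is mine, for illustration only):

	package main

	import "fmt"

	// renderSysconfigCmd reconstructs the remote command whose logged form
	// shows %!s(MISSING): create the sysconfig dir, write the drop-in via
	// sudo tee, then restart CRI-O so the option takes effect.
	func renderSysconfigCmd(body string) string {
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, body)
	}

	func main() {
		fmt.Println(renderSysconfigCmd("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"))
	}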
	I0419 20:37:57.447778  407144 start.go:293] postStartSetup for "multinode-151935" (driver="kvm2")
	I0419 20:37:57.447790  407144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:37:57.447817  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.448175  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:37:57.448215  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.451589  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.452114  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.452146  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.452340  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.452562  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.452761  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.452938  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.541370  407144 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:37:57.546155  407144 command_runner.go:130] > NAME=Buildroot
	I0419 20:37:57.546178  407144 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 20:37:57.546185  407144 command_runner.go:130] > ID=buildroot
	I0419 20:37:57.546193  407144 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 20:37:57.546201  407144 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 20:37:57.546240  407144 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:37:57.546288  407144 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:37:57.546354  407144 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:37:57.546447  407144 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:37:57.546462  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:37:57.546552  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:37:57.556819  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:37:57.583513  407144 start.go:296] duration metric: took 135.719505ms for postStartSetup
	I0419 20:37:57.583568  407144 fix.go:56] duration metric: took 1m31.686655628s for fixHost
	I0419 20:37:57.583597  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.586777  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.587212  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.587235  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.587396  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.587607  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.587759  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.587878  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.588069  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:37:57.588249  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:37:57.588261  407144 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:37:57.697797  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713559077.679570718
	
	I0419 20:37:57.697831  407144 fix.go:216] guest clock: 1713559077.679570718
	I0419 20:37:57.697860  407144 fix.go:229] Guest: 2024-04-19 20:37:57.679570718 +0000 UTC Remote: 2024-04-19 20:37:57.583573936 +0000 UTC m=+91.830385825 (delta=95.996782ms)
	I0419 20:37:57.697910  407144 fix.go:200] guest clock delta is within tolerance: 95.996782ms
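	The fix.go step above reads the guest clock over SSH (date +%s.%N, rendered as date +%!s(MISSING).%!N(MISSING) in the log for the same missing-operand reason), compares it with the host-side timestamp, and proceeds only when the skew is small; here the delta was about 96ms. A rough, self-contained sketch of that comparison; the 2-second tolerance is an assumption for illustration, not a value taken from minikube's source:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1713559077.679570718" (seconds.nanoseconds, as
	// printed by date on the guest) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Output captured from the guest in the log above.
		guest, err := parseGuestClock("1713559077.679570718")
		if err != nil {
			panic(err)
		}
		host := time.Now() // host wall clock at the moment of the check
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold
		fmt.Printf("guest=%s delta=%s within=%v\n", guest.UTC(), delta, delta <= tolerance)
	}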
	I0419 20:37:57.697916  407144 start.go:83] releasing machines lock for "multinode-151935", held for 1m31.801020731s
	I0419 20:37:57.697938  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.698222  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:37:57.700894  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.701286  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.701314  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.701462  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702000  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702210  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702278  407144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:37:57.702325  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.702427  407144 ssh_runner.go:195] Run: cat /version.json
	I0419 20:37:57.702452  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.705273  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705492  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705779  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.705805  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705894  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.705923  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705989  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.706253  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.706342  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.706475  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.706550  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.706606  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.706744  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.706881  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.786184  407144 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0419 20:37:57.786348  407144 ssh_runner.go:195] Run: systemctl --version
	I0419 20:37:57.821378  407144 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 20:37:57.821427  407144 command_runner.go:130] > systemd 252 (252)
	I0419 20:37:57.821446  407144 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 20:37:57.821519  407144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:37:57.986198  407144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 20:37:57.992452  407144 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 20:37:57.992520  407144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:37:57.992592  407144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:37:58.003212  407144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:37:58.003243  407144 start.go:494] detecting cgroup driver to use...
	I0419 20:37:58.003307  407144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:37:58.020749  407144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:37:58.036061  407144 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:37:58.036129  407144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:37:58.050866  407144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:37:58.065449  407144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:37:58.220346  407144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:37:58.365779  407144 docker.go:233] disabling docker service ...
	I0419 20:37:58.365907  407144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:37:58.382445  407144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:37:58.397620  407144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:37:58.546387  407144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:37:58.690971  407144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:37:58.705896  407144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:37:58.726346  407144 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0419 20:37:58.726939  407144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:37:58.727008  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.739072  407144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:37:58.739141  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.750596  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.762176  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.773657  407144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:37:58.785295  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.796806  407144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.808986  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.819802  407144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:37:58.829679  407144 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 20:37:58.829777  407144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:37:58.839266  407144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:37:58.983818  407144 ssh_runner.go:195] Run: sudo systemctl restart crio
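	The run above reshapes /etc/crio/crio.conf.d/02-crio.conf with a series of in-place sed edits (pause image, cgroupfs as the cgroup manager, a pod-scoped conmon cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl) before reloading systemd and restarting CRI-O. A compact sketch that assembles the same command sequence as data, roughly mirroring what ssh_runner executes; the function and its signature are illustrative, not minikube's crio.go:

	package main

	import "fmt"

	// crioConfigCommands lists the remote commands the log shows for tuning
	// the CRI-O drop-in: pause image, cgroup driver, conmon cgroup, CNI
	// cleanup, and the unprivileged-port sysctl, followed by a restart.
	func crioConfigCommands(conf, pauseImage, cgroupManager string) []string {
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo rm -rf /etc/cni/net.mk",
			fmt.Sprintf(`sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' %s`, conf),
			fmt.Sprintf(`sudo grep -q "^ *default_sysctls" %s || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' %s`, conf, conf),
			fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
			`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, cmd := range crioConfigCommands("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(cmd)
		}
	}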
	I0419 20:37:59.247690  407144 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:37:59.247758  407144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:37:59.252912  407144 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0419 20:37:59.252940  407144 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 20:37:59.252950  407144 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I0419 20:37:59.252960  407144 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 20:37:59.252968  407144 command_runner.go:130] > Access: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.252987  407144 command_runner.go:130] > Modify: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.252999  407144 command_runner.go:130] > Change: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.253004  407144 command_runner.go:130] >  Birth: -
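	After the restart, start.go waits up to 60s for the CRI-O socket to reappear (the stat output above shows /var/run/crio/crio.sock was back within about a second) before probing crictl. A plain bounded-poll sketch of that kind of wait; minikube's own retry helper may differ in interval and backoff:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the
	// timeout elapses. A stand-in for the "Will wait 60s for socket path"
	// step in the log above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}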
	I0419 20:37:59.253124  407144 start.go:562] Will wait 60s for crictl version
	I0419 20:37:59.253193  407144 ssh_runner.go:195] Run: which crictl
	I0419 20:37:59.257123  407144 command_runner.go:130] > /usr/bin/crictl
	I0419 20:37:59.257203  407144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:37:59.295793  407144 command_runner.go:130] > Version:  0.1.0
	I0419 20:37:59.295815  407144 command_runner.go:130] > RuntimeName:  cri-o
	I0419 20:37:59.295820  407144 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0419 20:37:59.295825  407144 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 20:37:59.295986  407144 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:37:59.296086  407144 ssh_runner.go:195] Run: crio --version
	I0419 20:37:59.326782  407144 command_runner.go:130] > crio version 1.29.1
	I0419 20:37:59.326814  407144 command_runner.go:130] > Version:        1.29.1
	I0419 20:37:59.326824  407144 command_runner.go:130] > GitCommit:      unknown
	I0419 20:37:59.326835  407144 command_runner.go:130] > GitCommitDate:  unknown
	I0419 20:37:59.326842  407144 command_runner.go:130] > GitTreeState:   clean
	I0419 20:37:59.326851  407144 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0419 20:37:59.326858  407144 command_runner.go:130] > GoVersion:      go1.21.6
	I0419 20:37:59.326865  407144 command_runner.go:130] > Compiler:       gc
	I0419 20:37:59.326873  407144 command_runner.go:130] > Platform:       linux/amd64
	I0419 20:37:59.326879  407144 command_runner.go:130] > Linkmode:       dynamic
	I0419 20:37:59.326904  407144 command_runner.go:130] > BuildTags:      
	I0419 20:37:59.326915  407144 command_runner.go:130] >   containers_image_ostree_stub
	I0419 20:37:59.326923  407144 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0419 20:37:59.326930  407144 command_runner.go:130] >   btrfs_noversion
	I0419 20:37:59.326939  407144 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0419 20:37:59.326945  407144 command_runner.go:130] >   libdm_no_deferred_remove
	I0419 20:37:59.326953  407144 command_runner.go:130] >   seccomp
	I0419 20:37:59.326960  407144 command_runner.go:130] > LDFlags:          unknown
	I0419 20:37:59.326970  407144 command_runner.go:130] > SeccompEnabled:   true
	I0419 20:37:59.326977  407144 command_runner.go:130] > AppArmorEnabled:  false
	I0419 20:37:59.328167  407144 ssh_runner.go:195] Run: crio --version
	I0419 20:37:59.357676  407144 command_runner.go:130] > crio version 1.29.1
	I0419 20:37:59.357703  407144 command_runner.go:130] > Version:        1.29.1
	I0419 20:37:59.357710  407144 command_runner.go:130] > GitCommit:      unknown
	I0419 20:37:59.357714  407144 command_runner.go:130] > GitCommitDate:  unknown
	I0419 20:37:59.357718  407144 command_runner.go:130] > GitTreeState:   clean
	I0419 20:37:59.357724  407144 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0419 20:37:59.357728  407144 command_runner.go:130] > GoVersion:      go1.21.6
	I0419 20:37:59.357732  407144 command_runner.go:130] > Compiler:       gc
	I0419 20:37:59.357736  407144 command_runner.go:130] > Platform:       linux/amd64
	I0419 20:37:59.357741  407144 command_runner.go:130] > Linkmode:       dynamic
	I0419 20:37:59.357745  407144 command_runner.go:130] > BuildTags:      
	I0419 20:37:59.357752  407144 command_runner.go:130] >   containers_image_ostree_stub
	I0419 20:37:59.357759  407144 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0419 20:37:59.357765  407144 command_runner.go:130] >   btrfs_noversion
	I0419 20:37:59.357772  407144 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0419 20:37:59.357779  407144 command_runner.go:130] >   libdm_no_deferred_remove
	I0419 20:37:59.357786  407144 command_runner.go:130] >   seccomp
	I0419 20:37:59.357793  407144 command_runner.go:130] > LDFlags:          unknown
	I0419 20:37:59.357799  407144 command_runner.go:130] > SeccompEnabled:   true
	I0419 20:37:59.357804  407144 command_runner.go:130] > AppArmorEnabled:  false
	I0419 20:37:59.359685  407144 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:37:59.361159  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:37:59.363527  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:59.363964  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:59.363995  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:59.364172  407144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:37:59.368489  407144 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0419 20:37:59.368619  407144 kubeadm.go:877] updating cluster {Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:37:59.368790  407144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:37:59.368850  407144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:37:59.414453  407144 command_runner.go:130] > {
	I0419 20:37:59.414479  407144 command_runner.go:130] >   "images": [
	I0419 20:37:59.414486  407144 command_runner.go:130] >     {
	I0419 20:37:59.414496  407144 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0419 20:37:59.414504  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414513  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0419 20:37:59.414517  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414524  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414538  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0419 20:37:59.414553  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0419 20:37:59.414559  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414567  407144 command_runner.go:130] >       "size": "65291810",
	I0419 20:37:59.414574  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414580  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414591  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414595  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414599  407144 command_runner.go:130] >     },
	I0419 20:37:59.414604  407144 command_runner.go:130] >     {
	I0419 20:37:59.414613  407144 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0419 20:37:59.414624  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414632  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0419 20:37:59.414642  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414648  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414661  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0419 20:37:59.414671  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0419 20:37:59.414675  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414679  407144 command_runner.go:130] >       "size": "1363676",
	I0419 20:37:59.414683  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414689  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414695  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414699  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414704  407144 command_runner.go:130] >     },
	I0419 20:37:59.414713  407144 command_runner.go:130] >     {
	I0419 20:37:59.414723  407144 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0419 20:37:59.414735  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414747  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0419 20:37:59.414756  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414763  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414775  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0419 20:37:59.414783  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0419 20:37:59.414788  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414796  407144 command_runner.go:130] >       "size": "31470524",
	I0419 20:37:59.414806  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414826  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414833  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414843  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414850  407144 command_runner.go:130] >     },
	I0419 20:37:59.414858  407144 command_runner.go:130] >     {
	I0419 20:37:59.414866  407144 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0419 20:37:59.414873  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414881  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0419 20:37:59.414891  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414903  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414917  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0419 20:37:59.414937  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0419 20:37:59.414946  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414951  407144 command_runner.go:130] >       "size": "61245718",
	I0419 20:37:59.414958  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414965  407144 command_runner.go:130] >       "username": "nonroot",
	I0419 20:37:59.414974  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414981  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414990  407144 command_runner.go:130] >     },
	I0419 20:37:59.414995  407144 command_runner.go:130] >     {
	I0419 20:37:59.415007  407144 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0419 20:37:59.415016  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415094  407144 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0419 20:37:59.415121  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415130  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415151  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0419 20:37:59.415164  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0419 20:37:59.415173  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415188  407144 command_runner.go:130] >       "size": "150779692",
	I0419 20:37:59.415197  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415207  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415218  407144 command_runner.go:130] >       },
	I0419 20:37:59.415226  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415235  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415243  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415247  407144 command_runner.go:130] >     },
	I0419 20:37:59.415255  407144 command_runner.go:130] >     {
	I0419 20:37:59.415269  407144 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0419 20:37:59.415303  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415316  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0419 20:37:59.415321  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415327  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415341  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0419 20:37:59.415357  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0419 20:37:59.415366  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415376  407144 command_runner.go:130] >       "size": "117609952",
	I0419 20:37:59.415385  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415395  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415404  407144 command_runner.go:130] >       },
	I0419 20:37:59.415410  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415417  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415422  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415430  407144 command_runner.go:130] >     },
	I0419 20:37:59.415440  407144 command_runner.go:130] >     {
	I0419 20:37:59.415453  407144 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0419 20:37:59.415465  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415477  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0419 20:37:59.415485  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415495  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415506  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0419 20:37:59.415523  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0419 20:37:59.415533  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415544  407144 command_runner.go:130] >       "size": "112170310",
	I0419 20:37:59.415553  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415564  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415574  407144 command_runner.go:130] >       },
	I0419 20:37:59.415583  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415589  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415595  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415604  407144 command_runner.go:130] >     },
	I0419 20:37:59.415613  407144 command_runner.go:130] >     {
	I0419 20:37:59.415627  407144 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0419 20:37:59.415636  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415648  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0419 20:37:59.415657  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415666  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415689  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0419 20:37:59.415706  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0419 20:37:59.415717  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415727  407144 command_runner.go:130] >       "size": "85932953",
	I0419 20:37:59.415736  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.415746  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415752  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415757  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415760  407144 command_runner.go:130] >     },
	I0419 20:37:59.415763  407144 command_runner.go:130] >     {
	I0419 20:37:59.415776  407144 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0419 20:37:59.415782  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415791  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0419 20:37:59.415801  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415808  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415819  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0419 20:37:59.415832  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0419 20:37:59.415838  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415845  407144 command_runner.go:130] >       "size": "63026502",
	I0419 20:37:59.415850  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415854  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415858  407144 command_runner.go:130] >       },
	I0419 20:37:59.415863  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415869  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415881  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415886  407144 command_runner.go:130] >     },
	I0419 20:37:59.415895  407144 command_runner.go:130] >     {
	I0419 20:37:59.415905  407144 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0419 20:37:59.415915  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415925  407144 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0419 20:37:59.415934  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415941  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415951  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0419 20:37:59.415965  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0419 20:37:59.415978  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415988  407144 command_runner.go:130] >       "size": "750414",
	I0419 20:37:59.415998  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.416008  407144 command_runner.go:130] >         "value": "65535"
	I0419 20:37:59.416013  407144 command_runner.go:130] >       },
	I0419 20:37:59.416019  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.416064  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.416079  407144 command_runner.go:130] >       "pinned": true
	I0419 20:37:59.416087  407144 command_runner.go:130] >     }
	I0419 20:37:59.416096  407144 command_runner.go:130] >   ]
	I0419 20:37:59.416106  407144 command_runner.go:130] > }
	I0419 20:37:59.416436  407144 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:37:59.416452  407144 crio.go:433] Images already preloaded, skipping extraction
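The preload check above parses the JSON that `sudo crictl images --output json` prints (the `images` array with `id`, `repoTags`, `repoDigests`, `size`, `uid`, `username`, `pinned` fields shown in the log). As a minimal, self-contained sketch of that shape — not minikube's own code; only the JSON keys are taken from the output above, everything else is illustrative — the same listing can be decoded like this:

// crictl_images_sketch.go: decode the `crictl images --output json` payload
// shown in the log above and print one line per image. Field names mirror
// the JSON keys in that output; the program itself is hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`   // reported as a string of bytes, e.g. "31470524"
	Pinned      bool     `json:"pinned"` // pause image is pinned in the output above
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// In the test run this command is executed on the VM via ssh_runner;
	// here it is run locally for illustration and assumes crictl is on PATH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s size=%s pinned=%v\n", tag, img.Size, img.Pinned)
	}
}

Comparing the decoded tags against the expected preload manifest is what lets crio.go:514 conclude "all images are preloaded" and skip extraction before the second listing below.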
	I0419 20:37:59.416514  407144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:37:59.453644  407144 command_runner.go:130] > {
	I0419 20:37:59.453675  407144 command_runner.go:130] >   "images": [
	I0419 20:37:59.453680  407144 command_runner.go:130] >     {
	I0419 20:37:59.453689  407144 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0419 20:37:59.453693  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453699  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0419 20:37:59.453702  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453707  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453717  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0419 20:37:59.453725  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0419 20:37:59.453735  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453742  407144 command_runner.go:130] >       "size": "65291810",
	I0419 20:37:59.453750  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.453756  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.453787  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.453795  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.453799  407144 command_runner.go:130] >     },
	I0419 20:37:59.453802  407144 command_runner.go:130] >     {
	I0419 20:37:59.453808  407144 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0419 20:37:59.453814  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453823  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0419 20:37:59.453829  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453836  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453849  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0419 20:37:59.453863  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0419 20:37:59.453869  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453879  407144 command_runner.go:130] >       "size": "1363676",
	I0419 20:37:59.453883  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.453891  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.453896  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.453903  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.453912  407144 command_runner.go:130] >     },
	I0419 20:37:59.453918  407144 command_runner.go:130] >     {
	I0419 20:37:59.453927  407144 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0419 20:37:59.453937  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453946  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0419 20:37:59.453952  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453959  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453972  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0419 20:37:59.453982  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0419 20:37:59.453987  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453995  407144 command_runner.go:130] >       "size": "31470524",
	I0419 20:37:59.454002  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454012  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454019  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454026  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454034  407144 command_runner.go:130] >     },
	I0419 20:37:59.454040  407144 command_runner.go:130] >     {
	I0419 20:37:59.454053  407144 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0419 20:37:59.454059  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454067  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0419 20:37:59.454072  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454081  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454094  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0419 20:37:59.454114  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0419 20:37:59.454124  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454130  407144 command_runner.go:130] >       "size": "61245718",
	I0419 20:37:59.454139  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454146  407144 command_runner.go:130] >       "username": "nonroot",
	I0419 20:37:59.454156  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454164  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454173  407144 command_runner.go:130] >     },
	I0419 20:37:59.454179  407144 command_runner.go:130] >     {
	I0419 20:37:59.454193  407144 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0419 20:37:59.454199  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454210  407144 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0419 20:37:59.454218  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454224  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454236  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0419 20:37:59.454248  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0419 20:37:59.454257  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454264  407144 command_runner.go:130] >       "size": "150779692",
	I0419 20:37:59.454284  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454291  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454300  407144 command_runner.go:130] >       },
	I0419 20:37:59.454307  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454316  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454321  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454324  407144 command_runner.go:130] >     },
	I0419 20:37:59.454328  407144 command_runner.go:130] >     {
	I0419 20:37:59.454337  407144 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0419 20:37:59.454348  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454358  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0419 20:37:59.454367  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454374  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454389  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0419 20:37:59.454404  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0419 20:37:59.454411  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454415  407144 command_runner.go:130] >       "size": "117609952",
	I0419 20:37:59.454424  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454430  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454439  407144 command_runner.go:130] >       },
	I0419 20:37:59.454445  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454455  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454461  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454470  407144 command_runner.go:130] >     },
	I0419 20:37:59.454475  407144 command_runner.go:130] >     {
	I0419 20:37:59.454488  407144 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0419 20:37:59.454496  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454502  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0419 20:37:59.454511  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454518  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454535  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0419 20:37:59.454551  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0419 20:37:59.454564  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454574  407144 command_runner.go:130] >       "size": "112170310",
	I0419 20:37:59.454581  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454585  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454589  407144 command_runner.go:130] >       },
	I0419 20:37:59.454596  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454605  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454612  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454620  407144 command_runner.go:130] >     },
	I0419 20:37:59.454626  407144 command_runner.go:130] >     {
	I0419 20:37:59.454638  407144 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0419 20:37:59.454645  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454656  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0419 20:37:59.454663  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454669  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454690  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0419 20:37:59.454706  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0419 20:37:59.454715  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454722  407144 command_runner.go:130] >       "size": "85932953",
	I0419 20:37:59.454733  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454740  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454750  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454756  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454764  407144 command_runner.go:130] >     },
	I0419 20:37:59.454774  407144 command_runner.go:130] >     {
	I0419 20:37:59.454788  407144 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0419 20:37:59.454798  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454807  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0419 20:37:59.454815  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454822  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454836  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0419 20:37:59.454847  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0419 20:37:59.454856  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454863  407144 command_runner.go:130] >       "size": "63026502",
	I0419 20:37:59.454872  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454879  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454888  407144 command_runner.go:130] >       },
	I0419 20:37:59.454895  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454901  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454911  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454917  407144 command_runner.go:130] >     },
	I0419 20:37:59.454930  407144 command_runner.go:130] >     {
	I0419 20:37:59.454943  407144 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0419 20:37:59.454953  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454961  407144 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0419 20:37:59.454969  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454976  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454990  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0419 20:37:59.455007  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0419 20:37:59.455014  407144 command_runner.go:130] >       ],
	I0419 20:37:59.455019  407144 command_runner.go:130] >       "size": "750414",
	I0419 20:37:59.455025  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.455035  407144 command_runner.go:130] >         "value": "65535"
	I0419 20:37:59.455043  407144 command_runner.go:130] >       },
	I0419 20:37:59.455049  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.455058  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.455063  407144 command_runner.go:130] >       "pinned": true
	I0419 20:37:59.455068  407144 command_runner.go:130] >     }
	I0419 20:37:59.455072  407144 command_runner.go:130] >   ]
	I0419 20:37:59.455079  407144 command_runner.go:130] > }
	I0419 20:37:59.455323  407144 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:37:59.455342  407144 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:37:59.455350  407144 kubeadm.go:928] updating node { 192.168.39.193 8443 v1.30.0 crio true true} ...
	I0419 20:37:59.455461  407144 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-151935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
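The kubeadm.go:940 block above is the systemd override minikube feeds to kubelet (`Wants=crio.service`, a cleared `ExecStart=`, then the real `ExecStart` with the bootstrap kubeconfig, kubelet config, hostname override, and node IP). A small sketch of how such a drop-in can be rendered is shown below; the flag names and paths are copied from the log, while the struct, template, and file name are hypothetical and are not minikube's actual template code.

// kubelet_unit_sketch.go: render the [Unit]/[Service]/[Install] override
// printed by kubeadm.go:940 above from a few per-node parameters.
package main

import (
	"os"
	"text/template"
)

type kubeletParams struct {
	KubernetesVersion string // e.g. "v1.30.0", as in the log above
	NodeName          string // e.g. "multinode-151935"
	NodeIP            string // e.g. "192.168.39.193"
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log line above; adjust for a different node.
	p := kubeletParams{
		KubernetesVersion: "v1.30.0",
		NodeName:          "multinode-151935",
		NodeIP:            "192.168.39.193",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

With the kubelet unit settled, the next step in the log is `crio config`, which dumps the runtime configuration that the generated cluster config (cgroupfs cgroup manager, bridge CNI) has to agree with.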
	I0419 20:37:59.455530  407144 ssh_runner.go:195] Run: crio config
	I0419 20:37:59.488235  407144 command_runner.go:130] ! time="2024-04-19 20:37:59.470103506Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0419 20:37:59.494800  407144 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0419 20:37:59.507846  407144 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0419 20:37:59.507870  407144 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0419 20:37:59.507876  407144 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0419 20:37:59.507880  407144 command_runner.go:130] > #
	I0419 20:37:59.507886  407144 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0419 20:37:59.507897  407144 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0419 20:37:59.507903  407144 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0419 20:37:59.507913  407144 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0419 20:37:59.507920  407144 command_runner.go:130] > # reload'.
	I0419 20:37:59.507926  407144 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0419 20:37:59.507931  407144 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0419 20:37:59.507937  407144 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0419 20:37:59.507943  407144 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0419 20:37:59.507950  407144 command_runner.go:130] > [crio]
	I0419 20:37:59.507956  407144 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0419 20:37:59.507963  407144 command_runner.go:130] > # containers images, in this directory.
	I0419 20:37:59.507968  407144 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0419 20:37:59.507980  407144 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0419 20:37:59.507987  407144 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0419 20:37:59.507995  407144 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0419 20:37:59.508001  407144 command_runner.go:130] > # imagestore = ""
	I0419 20:37:59.508007  407144 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0419 20:37:59.508016  407144 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0419 20:37:59.508020  407144 command_runner.go:130] > storage_driver = "overlay"
	I0419 20:37:59.508025  407144 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0419 20:37:59.508033  407144 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0419 20:37:59.508039  407144 command_runner.go:130] > storage_option = [
	I0419 20:37:59.508044  407144 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0419 20:37:59.508050  407144 command_runner.go:130] > ]
	I0419 20:37:59.508057  407144 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0419 20:37:59.508065  407144 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0419 20:37:59.508070  407144 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0419 20:37:59.508075  407144 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0419 20:37:59.508083  407144 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0419 20:37:59.508091  407144 command_runner.go:130] > # always happen on a node reboot
	I0419 20:37:59.508095  407144 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0419 20:37:59.508106  407144 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0419 20:37:59.508114  407144 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0419 20:37:59.508121  407144 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0419 20:37:59.508126  407144 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0419 20:37:59.508135  407144 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0419 20:37:59.508150  407144 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0419 20:37:59.508156  407144 command_runner.go:130] > # internal_wipe = true
	I0419 20:37:59.508163  407144 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0419 20:37:59.508171  407144 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0419 20:37:59.508175  407144 command_runner.go:130] > # internal_repair = false
	I0419 20:37:59.508180  407144 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0419 20:37:59.508188  407144 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0419 20:37:59.508196  407144 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0419 20:37:59.508202  407144 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0419 20:37:59.508212  407144 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0419 20:37:59.508218  407144 command_runner.go:130] > [crio.api]
	I0419 20:37:59.508224  407144 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0419 20:37:59.508230  407144 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0419 20:37:59.508235  407144 command_runner.go:130] > # IP address on which the stream server will listen.
	I0419 20:37:59.508241  407144 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0419 20:37:59.508248  407144 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0419 20:37:59.508255  407144 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0419 20:37:59.508259  407144 command_runner.go:130] > # stream_port = "0"
	I0419 20:37:59.508267  407144 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0419 20:37:59.508271  407144 command_runner.go:130] > # stream_enable_tls = false
	I0419 20:37:59.508279  407144 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0419 20:37:59.508283  407144 command_runner.go:130] > # stream_idle_timeout = ""
	I0419 20:37:59.508292  407144 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0419 20:37:59.508301  407144 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0419 20:37:59.508307  407144 command_runner.go:130] > # minutes.
	I0419 20:37:59.508311  407144 command_runner.go:130] > # stream_tls_cert = ""
	I0419 20:37:59.508319  407144 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0419 20:37:59.508327  407144 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0419 20:37:59.508331  407144 command_runner.go:130] > # stream_tls_key = ""
	I0419 20:37:59.508339  407144 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0419 20:37:59.508345  407144 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0419 20:37:59.508367  407144 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0419 20:37:59.508375  407144 command_runner.go:130] > # stream_tls_ca = ""
	I0419 20:37:59.508382  407144 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0419 20:37:59.508386  407144 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0419 20:37:59.508395  407144 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0419 20:37:59.508412  407144 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0419 20:37:59.508421  407144 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0419 20:37:59.508428  407144 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0419 20:37:59.508432  407144 command_runner.go:130] > [crio.runtime]
	I0419 20:37:59.508438  407144 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0419 20:37:59.508446  407144 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0419 20:37:59.508450  407144 command_runner.go:130] > # "nofile=1024:2048"
	I0419 20:37:59.508457  407144 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0419 20:37:59.508463  407144 command_runner.go:130] > # default_ulimits = [
	I0419 20:37:59.508466  407144 command_runner.go:130] > # ]
	I0419 20:37:59.508473  407144 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0419 20:37:59.508479  407144 command_runner.go:130] > # no_pivot = false
	I0419 20:37:59.508486  407144 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0419 20:37:59.508496  407144 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0419 20:37:59.508503  407144 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0419 20:37:59.508508  407144 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0419 20:37:59.508515  407144 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0419 20:37:59.508522  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0419 20:37:59.508528  407144 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0419 20:37:59.508532  407144 command_runner.go:130] > # Cgroup setting for conmon
	I0419 20:37:59.508541  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0419 20:37:59.508551  407144 command_runner.go:130] > conmon_cgroup = "pod"
	I0419 20:37:59.508559  407144 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0419 20:37:59.508564  407144 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0419 20:37:59.508573  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0419 20:37:59.508577  407144 command_runner.go:130] > conmon_env = [
	I0419 20:37:59.508583  407144 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0419 20:37:59.508588  407144 command_runner.go:130] > ]
	I0419 20:37:59.508594  407144 command_runner.go:130] > # Additional environment variables to set for all the
	I0419 20:37:59.508601  407144 command_runner.go:130] > # containers. These are overridden if set in the
	I0419 20:37:59.508606  407144 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0419 20:37:59.508613  407144 command_runner.go:130] > # default_env = [
	I0419 20:37:59.508617  407144 command_runner.go:130] > # ]
	I0419 20:37:59.508625  407144 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0419 20:37:59.508650  407144 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0419 20:37:59.508660  407144 command_runner.go:130] > # selinux = false
	I0419 20:37:59.508673  407144 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0419 20:37:59.508682  407144 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0419 20:37:59.508690  407144 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0419 20:37:59.508696  407144 command_runner.go:130] > # seccomp_profile = ""
	I0419 20:37:59.508702  407144 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0419 20:37:59.508710  407144 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0419 20:37:59.508716  407144 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0419 20:37:59.508723  407144 command_runner.go:130] > # which might increase security.
	I0419 20:37:59.508728  407144 command_runner.go:130] > # This option is currently deprecated,
	I0419 20:37:59.508736  407144 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0419 20:37:59.508740  407144 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0419 20:37:59.508749  407144 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0419 20:37:59.508755  407144 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0419 20:37:59.508765  407144 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0419 20:37:59.508771  407144 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0419 20:37:59.508778  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.508783  407144 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0419 20:37:59.508791  407144 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0419 20:37:59.508795  407144 command_runner.go:130] > # the cgroup blockio controller.
	I0419 20:37:59.508802  407144 command_runner.go:130] > # blockio_config_file = ""
	I0419 20:37:59.508808  407144 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0419 20:37:59.508814  407144 command_runner.go:130] > # blockio parameters.
	I0419 20:37:59.508818  407144 command_runner.go:130] > # blockio_reload = false
	I0419 20:37:59.508823  407144 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0419 20:37:59.508830  407144 command_runner.go:130] > # irqbalance daemon.
	I0419 20:37:59.508835  407144 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0419 20:37:59.508843  407144 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0419 20:37:59.508849  407144 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0419 20:37:59.508858  407144 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0419 20:37:59.508866  407144 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0419 20:37:59.508872  407144 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0419 20:37:59.508880  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.508884  407144 command_runner.go:130] > # rdt_config_file = ""
	I0419 20:37:59.508892  407144 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0419 20:37:59.508896  407144 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0419 20:37:59.508921  407144 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0419 20:37:59.508931  407144 command_runner.go:130] > # separate_pull_cgroup = ""
	I0419 20:37:59.508939  407144 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0419 20:37:59.508945  407144 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0419 20:37:59.508952  407144 command_runner.go:130] > # will be added.
	I0419 20:37:59.508955  407144 command_runner.go:130] > # default_capabilities = [
	I0419 20:37:59.508961  407144 command_runner.go:130] > # 	"CHOWN",
	I0419 20:37:59.508965  407144 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0419 20:37:59.508971  407144 command_runner.go:130] > # 	"FSETID",
	I0419 20:37:59.508974  407144 command_runner.go:130] > # 	"FOWNER",
	I0419 20:37:59.508980  407144 command_runner.go:130] > # 	"SETGID",
	I0419 20:37:59.508987  407144 command_runner.go:130] > # 	"SETUID",
	I0419 20:37:59.508993  407144 command_runner.go:130] > # 	"SETPCAP",
	I0419 20:37:59.508997  407144 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0419 20:37:59.509003  407144 command_runner.go:130] > # 	"KILL",
	I0419 20:37:59.509007  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509014  407144 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0419 20:37:59.509023  407144 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0419 20:37:59.509030  407144 command_runner.go:130] > # add_inheritable_capabilities = false
	I0419 20:37:59.509038  407144 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0419 20:37:59.509045  407144 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0419 20:37:59.509050  407144 command_runner.go:130] > default_sysctls = [
	I0419 20:37:59.509055  407144 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0419 20:37:59.509060  407144 command_runner.go:130] > ]
	I0419 20:37:59.509067  407144 command_runner.go:130] > # List of devices on the host that a
	I0419 20:37:59.509075  407144 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0419 20:37:59.509082  407144 command_runner.go:130] > # allowed_devices = [
	I0419 20:37:59.509086  407144 command_runner.go:130] > # 	"/dev/fuse",
	I0419 20:37:59.509092  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509097  407144 command_runner.go:130] > # List of additional devices. specified as
	I0419 20:37:59.509106  407144 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0419 20:37:59.509113  407144 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0419 20:37:59.509121  407144 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0419 20:37:59.509126  407144 command_runner.go:130] > # additional_devices = [
	I0419 20:37:59.509130  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509137  407144 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0419 20:37:59.509141  407144 command_runner.go:130] > # cdi_spec_dirs = [
	I0419 20:37:59.509147  407144 command_runner.go:130] > # 	"/etc/cdi",
	I0419 20:37:59.509151  407144 command_runner.go:130] > # 	"/var/run/cdi",
	I0419 20:37:59.509155  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509163  407144 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0419 20:37:59.509171  407144 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0419 20:37:59.509176  407144 command_runner.go:130] > # Defaults to false.
	I0419 20:37:59.509181  407144 command_runner.go:130] > # device_ownership_from_security_context = false
	I0419 20:37:59.509189  407144 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0419 20:37:59.509195  407144 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0419 20:37:59.509201  407144 command_runner.go:130] > # hooks_dir = [
	I0419 20:37:59.509205  407144 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0419 20:37:59.509211  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509217  407144 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0419 20:37:59.509226  407144 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0419 20:37:59.509233  407144 command_runner.go:130] > # its default mounts from the following two files:
	I0419 20:37:59.509236  407144 command_runner.go:130] > #
	I0419 20:37:59.509242  407144 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0419 20:37:59.509250  407144 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0419 20:37:59.509258  407144 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0419 20:37:59.509264  407144 command_runner.go:130] > #
	I0419 20:37:59.509269  407144 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0419 20:37:59.509277  407144 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0419 20:37:59.509283  407144 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0419 20:37:59.509293  407144 command_runner.go:130] > #      only add mounts it finds in this file.
	I0419 20:37:59.509299  407144 command_runner.go:130] > #
	I0419 20:37:59.509303  407144 command_runner.go:130] > # default_mounts_file = ""
	I0419 20:37:59.509310  407144 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0419 20:37:59.509317  407144 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0419 20:37:59.509320  407144 command_runner.go:130] > pids_limit = 1024
	I0419 20:37:59.509329  407144 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0419 20:37:59.509337  407144 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0419 20:37:59.509346  407144 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0419 20:37:59.509355  407144 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0419 20:37:59.509361  407144 command_runner.go:130] > # log_size_max = -1
	I0419 20:37:59.509368  407144 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0419 20:37:59.509374  407144 command_runner.go:130] > # log_to_journald = false
	I0419 20:37:59.509381  407144 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0419 20:37:59.509389  407144 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0419 20:37:59.509396  407144 command_runner.go:130] > # Path to directory for container attach sockets.
	I0419 20:37:59.509402  407144 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0419 20:37:59.509413  407144 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0419 20:37:59.509419  407144 command_runner.go:130] > # bind_mount_prefix = ""
	I0419 20:37:59.509424  407144 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0419 20:37:59.509430  407144 command_runner.go:130] > # read_only = false
	I0419 20:37:59.509436  407144 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0419 20:37:59.509444  407144 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0419 20:37:59.509449  407144 command_runner.go:130] > # live configuration reload.
	I0419 20:37:59.509453  407144 command_runner.go:130] > # log_level = "info"
	I0419 20:37:59.509461  407144 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0419 20:37:59.509469  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.509473  407144 command_runner.go:130] > # log_filter = ""
	I0419 20:37:59.509481  407144 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0419 20:37:59.509490  407144 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0419 20:37:59.509496  407144 command_runner.go:130] > # separated by comma.
	I0419 20:37:59.509504  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509510  407144 command_runner.go:130] > # uid_mappings = ""
	I0419 20:37:59.509515  407144 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0419 20:37:59.509523  407144 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0419 20:37:59.509527  407144 command_runner.go:130] > # separated by comma.
	I0419 20:37:59.509536  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509544  407144 command_runner.go:130] > # gid_mappings = ""
	I0419 20:37:59.509553  407144 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0419 20:37:59.509561  407144 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0419 20:37:59.509568  407144 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0419 20:37:59.509577  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509583  407144 command_runner.go:130] > # minimum_mappable_uid = -1
	I0419 20:37:59.509589  407144 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0419 20:37:59.509598  407144 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0419 20:37:59.509606  407144 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0419 20:37:59.509613  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509620  407144 command_runner.go:130] > # minimum_mappable_gid = -1
	I0419 20:37:59.509626  407144 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0419 20:37:59.509635  407144 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0419 20:37:59.509642  407144 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0419 20:37:59.509650  407144 command_runner.go:130] > # ctr_stop_timeout = 30
	I0419 20:37:59.509655  407144 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0419 20:37:59.509663  407144 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0419 20:37:59.509669  407144 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0419 20:37:59.509676  407144 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0419 20:37:59.509680  407144 command_runner.go:130] > drop_infra_ctr = false
	I0419 20:37:59.509688  407144 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0419 20:37:59.509693  407144 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0419 20:37:59.509702  407144 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0419 20:37:59.509709  407144 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0419 20:37:59.509715  407144 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0419 20:37:59.509723  407144 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0419 20:37:59.509731  407144 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0419 20:37:59.509736  407144 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0419 20:37:59.509741  407144 command_runner.go:130] > # shared_cpuset = ""
	I0419 20:37:59.509747  407144 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0419 20:37:59.509754  407144 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0419 20:37:59.509758  407144 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0419 20:37:59.509765  407144 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0419 20:37:59.509771  407144 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0419 20:37:59.509776  407144 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0419 20:37:59.509787  407144 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0419 20:37:59.509791  407144 command_runner.go:130] > # enable_criu_support = false
	I0419 20:37:59.509796  407144 command_runner.go:130] > # Enable/disable the generation of the container,
	I0419 20:37:59.509802  407144 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0419 20:37:59.509806  407144 command_runner.go:130] > # enable_pod_events = false
	I0419 20:37:59.509812  407144 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0419 20:37:59.509825  407144 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0419 20:37:59.509832  407144 command_runner.go:130] > # default_runtime = "runc"
	I0419 20:37:59.509837  407144 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0419 20:37:59.509846  407144 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0419 20:37:59.509857  407144 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0419 20:37:59.509864  407144 command_runner.go:130] > # creation as a file is not desired either.
	I0419 20:37:59.509873  407144 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0419 20:37:59.509880  407144 command_runner.go:130] > # the hostname is being managed dynamically.
	I0419 20:37:59.509885  407144 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0419 20:37:59.509891  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509896  407144 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0419 20:37:59.509905  407144 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0419 20:37:59.509912  407144 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0419 20:37:59.509920  407144 command_runner.go:130] > # Each entry in the table should follow the format:
	I0419 20:37:59.509923  407144 command_runner.go:130] > #
	I0419 20:37:59.509927  407144 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0419 20:37:59.509934  407144 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0419 20:37:59.509974  407144 command_runner.go:130] > # runtime_type = "oci"
	I0419 20:37:59.509982  407144 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0419 20:37:59.509987  407144 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0419 20:37:59.509991  407144 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0419 20:37:59.509995  407144 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0419 20:37:59.509999  407144 command_runner.go:130] > # monitor_env = []
	I0419 20:37:59.510006  407144 command_runner.go:130] > # privileged_without_host_devices = false
	I0419 20:37:59.510013  407144 command_runner.go:130] > # allowed_annotations = []
	I0419 20:37:59.510018  407144 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0419 20:37:59.510024  407144 command_runner.go:130] > # Where:
	I0419 20:37:59.510029  407144 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0419 20:37:59.510038  407144 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0419 20:37:59.510044  407144 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0419 20:37:59.510052  407144 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0419 20:37:59.510059  407144 command_runner.go:130] > #   in $PATH.
	I0419 20:37:59.510069  407144 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0419 20:37:59.510076  407144 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0419 20:37:59.510082  407144 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0419 20:37:59.510088  407144 command_runner.go:130] > #   state.
	I0419 20:37:59.510094  407144 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0419 20:37:59.510102  407144 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0419 20:37:59.510110  407144 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0419 20:37:59.510119  407144 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0419 20:37:59.510126  407144 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0419 20:37:59.510134  407144 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0419 20:37:59.510146  407144 command_runner.go:130] > #   The currently recognized values are:
	I0419 20:37:59.510155  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0419 20:37:59.510164  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0419 20:37:59.510170  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0419 20:37:59.510178  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0419 20:37:59.510187  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0419 20:37:59.510196  407144 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0419 20:37:59.510204  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0419 20:37:59.510212  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0419 20:37:59.510220  407144 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0419 20:37:59.510229  407144 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0419 20:37:59.510233  407144 command_runner.go:130] > #   deprecated option "conmon".
	I0419 20:37:59.510242  407144 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0419 20:37:59.510248  407144 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0419 20:37:59.510254  407144 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0419 20:37:59.510262  407144 command_runner.go:130] > #   should be moved to the container's cgroup
	I0419 20:37:59.510268  407144 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0419 20:37:59.510275  407144 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0419 20:37:59.510282  407144 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0419 20:37:59.510289  407144 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0419 20:37:59.510292  407144 command_runner.go:130] > #
	I0419 20:37:59.510304  407144 command_runner.go:130] > # Using the seccomp notifier feature:
	I0419 20:37:59.510312  407144 command_runner.go:130] > #
	I0419 20:37:59.510320  407144 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0419 20:37:59.510329  407144 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0419 20:37:59.510334  407144 command_runner.go:130] > #
	I0419 20:37:59.510340  407144 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0419 20:37:59.510348  407144 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0419 20:37:59.510354  407144 command_runner.go:130] > #
	I0419 20:37:59.510360  407144 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0419 20:37:59.510365  407144 command_runner.go:130] > # feature.
	I0419 20:37:59.510374  407144 command_runner.go:130] > #
	I0419 20:37:59.510382  407144 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0419 20:37:59.510389  407144 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0419 20:37:59.510397  407144 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0419 20:37:59.510409  407144 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0419 20:37:59.510421  407144 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I0419 20:37:59.510426  407144 command_runner.go:130] > #
	I0419 20:37:59.510433  407144 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0419 20:37:59.510440  407144 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0419 20:37:59.510444  407144 command_runner.go:130] > #
	I0419 20:37:59.510450  407144 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0419 20:37:59.510458  407144 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0419 20:37:59.510464  407144 command_runner.go:130] > #
	I0419 20:37:59.510470  407144 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0419 20:37:59.510478  407144 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0419 20:37:59.510482  407144 command_runner.go:130] > # limitation.
	I0419 20:37:59.510488  407144 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0419 20:37:59.510493  407144 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0419 20:37:59.510497  407144 command_runner.go:130] > runtime_type = "oci"
	I0419 20:37:59.510503  407144 command_runner.go:130] > runtime_root = "/run/runc"
	I0419 20:37:59.510508  407144 command_runner.go:130] > runtime_config_path = ""
	I0419 20:37:59.510515  407144 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0419 20:37:59.510522  407144 command_runner.go:130] > monitor_cgroup = "pod"
	I0419 20:37:59.510526  407144 command_runner.go:130] > monitor_exec_cgroup = ""
	I0419 20:37:59.510531  407144 command_runner.go:130] > monitor_env = [
	I0419 20:37:59.510537  407144 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0419 20:37:59.510542  407144 command_runner.go:130] > ]
	I0419 20:37:59.510547  407144 command_runner.go:130] > privileged_without_host_devices = false
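For orientation, here is a minimal sketch of what an additional runtime-handler entry could look like, following the field descriptions quoted above; the handler name "crun", its paths, and the single allowed annotation are illustrative assumptions, not part of this test run:

	[crio.runtime.runtimes.crun]
	# Absolute path to the runtime executable; if omitted, "crun" would be looked up in $PATH.
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Allow this handler to process the seccomp notifier annotation described above.
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]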
	I0419 20:37:59.510555  407144 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0419 20:37:59.510563  407144 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0419 20:37:59.510569  407144 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0419 20:37:59.510576  407144 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0419 20:37:59.510588  407144 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0419 20:37:59.510596  407144 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0419 20:37:59.510607  407144 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0419 20:37:59.510617  407144 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0419 20:37:59.510624  407144 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0419 20:37:59.510634  407144 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0419 20:37:59.510640  407144 command_runner.go:130] > # Example:
	I0419 20:37:59.510644  407144 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0419 20:37:59.510652  407144 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0419 20:37:59.510661  407144 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0419 20:37:59.510669  407144 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0419 20:37:59.510675  407144 command_runner.go:130] > # cpuset = "0-1"
	I0419 20:37:59.510679  407144 command_runner.go:130] > # cpushares = "0"
	I0419 20:37:59.510685  407144 command_runner.go:130] > # Where:
	I0419 20:37:59.510690  407144 command_runner.go:130] > # The workload name is workload-type.
	I0419 20:37:59.510699  407144 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0419 20:37:59.510706  407144 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0419 20:37:59.510714  407144 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0419 20:37:59.510721  407144 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0419 20:37:59.510729  407144 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
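Putting the two halves of the workloads mechanism together (the names and values below are illustrative assumptions, not taken from this run): the crio.conf table defines the defaults, and the pod opts in and overrides purely through annotations, as described above.

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = "1024"
	# Pod side (Kubernetes annotations, shown here as comments):
	#   io.crio/workload: ""                                          <- key-only annotation that activates the workload
	#   io.crio.workload-type/my-container: '{"cpushares": "512"}'    <- per-container override of the default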
	I0419 20:37:59.510734  407144 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0419 20:37:59.510740  407144 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0419 20:37:59.510747  407144 command_runner.go:130] > # Default value is set to true
	I0419 20:37:59.510751  407144 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0419 20:37:59.510757  407144 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0419 20:37:59.510762  407144 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0419 20:37:59.510766  407144 command_runner.go:130] > # Default value is set to 'false'
	I0419 20:37:59.510773  407144 command_runner.go:130] > # disable_hostport_mapping = false
	I0419 20:37:59.510779  407144 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0419 20:37:59.510782  407144 command_runner.go:130] > #
	I0419 20:37:59.510788  407144 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0419 20:37:59.510793  407144 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0419 20:37:59.510799  407144 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0419 20:37:59.510805  407144 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0419 20:37:59.510812  407144 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0419 20:37:59.510816  407144 command_runner.go:130] > [crio.image]
	I0419 20:37:59.510821  407144 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0419 20:37:59.510825  407144 command_runner.go:130] > # default_transport = "docker://"
	I0419 20:37:59.510831  407144 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0419 20:37:59.510837  407144 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0419 20:37:59.510840  407144 command_runner.go:130] > # global_auth_file = ""
	I0419 20:37:59.510845  407144 command_runner.go:130] > # The image used to instantiate infra containers.
	I0419 20:37:59.510850  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.510854  407144 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0419 20:37:59.510860  407144 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0419 20:37:59.510869  407144 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0419 20:37:59.510873  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.510877  407144 command_runner.go:130] > # pause_image_auth_file = ""
	I0419 20:37:59.510882  407144 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0419 20:37:59.510888  407144 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0419 20:37:59.510893  407144 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0419 20:37:59.510899  407144 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0419 20:37:59.510902  407144 command_runner.go:130] > # pause_command = "/pause"
	I0419 20:37:59.510908  407144 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0419 20:37:59.510913  407144 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0419 20:37:59.510918  407144 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0419 20:37:59.510924  407144 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0419 20:37:59.510929  407144 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0419 20:37:59.510935  407144 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0419 20:37:59.510938  407144 command_runner.go:130] > # pinned_images = [
	I0419 20:37:59.510941  407144 command_runner.go:130] > # ]
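As a sketch of the three match styles just described (the non-pause image names are illustrative assumptions, not pulled from this run), a populated list could look like:

	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match: must match the entire name
		"quay.io/myorg/critical-*",    # glob match: wildcard only at the end
		"*build-cache*",               # keyword match: wildcards on both ends
	]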
	I0419 20:37:59.510946  407144 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0419 20:37:59.510952  407144 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0419 20:37:59.510958  407144 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0419 20:37:59.510964  407144 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0419 20:37:59.510968  407144 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0419 20:37:59.510972  407144 command_runner.go:130] > # signature_policy = ""
	I0419 20:37:59.510977  407144 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0419 20:37:59.510983  407144 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0419 20:37:59.510991  407144 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0419 20:37:59.511003  407144 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0419 20:37:59.511010  407144 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0419 20:37:59.511015  407144 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0419 20:37:59.511023  407144 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0419 20:37:59.511031  407144 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0419 20:37:59.511037  407144 command_runner.go:130] > # changing them here.
	I0419 20:37:59.511041  407144 command_runner.go:130] > # insecure_registries = [
	I0419 20:37:59.511046  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511053  407144 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0419 20:37:59.511060  407144 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0419 20:37:59.511064  407144 command_runner.go:130] > # image_volumes = "mkdir"
	I0419 20:37:59.511073  407144 command_runner.go:130] > # Temporary directory to use for storing big files
	I0419 20:37:59.511080  407144 command_runner.go:130] > # big_files_temporary_dir = ""
	I0419 20:37:59.511086  407144 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0419 20:37:59.511092  407144 command_runner.go:130] > # CNI plugins.
	I0419 20:37:59.511095  407144 command_runner.go:130] > [crio.network]
	I0419 20:37:59.511103  407144 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0419 20:37:59.511109  407144 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0419 20:37:59.511116  407144 command_runner.go:130] > # cni_default_network = ""
	I0419 20:37:59.511122  407144 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0419 20:37:59.511128  407144 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0419 20:37:59.511134  407144 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0419 20:37:59.511139  407144 command_runner.go:130] > # plugin_dirs = [
	I0419 20:37:59.511143  407144 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0419 20:37:59.511146  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511152  407144 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0419 20:37:59.511158  407144 command_runner.go:130] > [crio.metrics]
	I0419 20:37:59.511163  407144 command_runner.go:130] > # Globally enable or disable metrics support.
	I0419 20:37:59.511169  407144 command_runner.go:130] > enable_metrics = true
	I0419 20:37:59.511174  407144 command_runner.go:130] > # Specify enabled metrics collectors.
	I0419 20:37:59.511180  407144 command_runner.go:130] > # Per default all metrics are enabled.
	I0419 20:37:59.511186  407144 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0419 20:37:59.511194  407144 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0419 20:37:59.511202  407144 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0419 20:37:59.511208  407144 command_runner.go:130] > # metrics_collectors = [
	I0419 20:37:59.511212  407144 command_runner.go:130] > # 	"operations",
	I0419 20:37:59.511219  407144 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0419 20:37:59.511223  407144 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0419 20:37:59.511230  407144 command_runner.go:130] > # 	"operations_errors",
	I0419 20:37:59.511234  407144 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0419 20:37:59.511240  407144 command_runner.go:130] > # 	"image_pulls_by_name",
	I0419 20:37:59.511245  407144 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0419 20:37:59.511254  407144 command_runner.go:130] > # 	"image_pulls_failures",
	I0419 20:37:59.511260  407144 command_runner.go:130] > # 	"image_pulls_successes",
	I0419 20:37:59.511264  407144 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0419 20:37:59.511271  407144 command_runner.go:130] > # 	"image_layer_reuse",
	I0419 20:37:59.511276  407144 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0419 20:37:59.511286  407144 command_runner.go:130] > # 	"containers_oom_total",
	I0419 20:37:59.511293  407144 command_runner.go:130] > # 	"containers_oom",
	I0419 20:37:59.511297  407144 command_runner.go:130] > # 	"processes_defunct",
	I0419 20:37:59.511303  407144 command_runner.go:130] > # 	"operations_total",
	I0419 20:37:59.511307  407144 command_runner.go:130] > # 	"operations_latency_seconds",
	I0419 20:37:59.511311  407144 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0419 20:37:59.511318  407144 command_runner.go:130] > # 	"operations_errors_total",
	I0419 20:37:59.511322  407144 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0419 20:37:59.511329  407144 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0419 20:37:59.511333  407144 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0419 20:37:59.511338  407144 command_runner.go:130] > # 	"image_pulls_success_total",
	I0419 20:37:59.511342  407144 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0419 20:37:59.511349  407144 command_runner.go:130] > # 	"containers_oom_count_total",
	I0419 20:37:59.511353  407144 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0419 20:37:59.511359  407144 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0419 20:37:59.511363  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511370  407144 command_runner.go:130] > # The port on which the metrics server will listen.
	I0419 20:37:59.511374  407144 command_runner.go:130] > # metrics_port = 9090
	I0419 20:37:59.511379  407144 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0419 20:37:59.511386  407144 command_runner.go:130] > # metrics_socket = ""
	I0419 20:37:59.511391  407144 command_runner.go:130] > # The certificate for the secure metrics server.
	I0419 20:37:59.511399  407144 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0419 20:37:59.511410  407144 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0419 20:37:59.511417  407144 command_runner.go:130] > # certificate on any modification event.
	I0419 20:37:59.511421  407144 command_runner.go:130] > # metrics_cert = ""
	I0419 20:37:59.511429  407144 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0419 20:37:59.511434  407144 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0419 20:37:59.511441  407144 command_runner.go:130] > # metrics_key = ""
	I0419 20:37:59.511446  407144 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0419 20:37:59.511453  407144 command_runner.go:130] > [crio.tracing]
	I0419 20:37:59.511458  407144 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0419 20:37:59.511464  407144 command_runner.go:130] > # enable_tracing = false
	I0419 20:37:59.511469  407144 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0419 20:37:59.511476  407144 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0419 20:37:59.511482  407144 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0419 20:37:59.511490  407144 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0419 20:37:59.511495  407144 command_runner.go:130] > # CRI-O NRI configuration.
	I0419 20:37:59.511501  407144 command_runner.go:130] > [crio.nri]
	I0419 20:37:59.511506  407144 command_runner.go:130] > # Globally enable or disable NRI.
	I0419 20:37:59.511511  407144 command_runner.go:130] > # enable_nri = false
	I0419 20:37:59.511519  407144 command_runner.go:130] > # NRI socket to listen on.
	I0419 20:37:59.511526  407144 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0419 20:37:59.511531  407144 command_runner.go:130] > # NRI plugin directory to use.
	I0419 20:37:59.511538  407144 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0419 20:37:59.511542  407144 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0419 20:37:59.511549  407144 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0419 20:37:59.511554  407144 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0419 20:37:59.511561  407144 command_runner.go:130] > # nri_disable_connections = false
	I0419 20:37:59.511566  407144 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0419 20:37:59.511573  407144 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0419 20:37:59.511578  407144 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0419 20:37:59.511584  407144 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0419 20:37:59.511590  407144 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0419 20:37:59.511595  407144 command_runner.go:130] > [crio.stats]
	I0419 20:37:59.511601  407144 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0419 20:37:59.511608  407144 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0419 20:37:59.511615  407144 command_runner.go:130] > # stats_collection_period = 0
	I0419 20:37:59.511750  407144 cni.go:84] Creating CNI manager for ""
	I0419 20:37:59.511764  407144 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 20:37:59.511777  407144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:37:59.511806  407144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-151935 NodeName:multinode-151935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:37:59.511938  407144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-151935"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 20:37:59.512002  407144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:37:59.523409  407144 command_runner.go:130] > kubeadm
	I0419 20:37:59.523427  407144 command_runner.go:130] > kubectl
	I0419 20:37:59.523430  407144 command_runner.go:130] > kubelet
	I0419 20:37:59.523455  407144 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:37:59.523503  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:37:59.534698  407144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0419 20:37:59.552173  407144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:37:59.570130  407144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0419 20:37:59.587269  407144 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0419 20:37:59.591269  407144 command_runner.go:130] > 192.168.39.193	control-plane.minikube.internal
	I0419 20:37:59.591355  407144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:37:59.731442  407144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:37:59.747761  407144 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935 for IP: 192.168.39.193
	I0419 20:37:59.747781  407144 certs.go:194] generating shared ca certs ...
	I0419 20:37:59.747797  407144 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:37:59.747948  407144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:37:59.747996  407144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:37:59.748007  407144 certs.go:256] generating profile certs ...
	I0419 20:37:59.748089  407144 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/client.key
	I0419 20:37:59.748148  407144 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key.e4fd995d
	I0419 20:37:59.748184  407144 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key
	I0419 20:37:59.748197  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:37:59.748212  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:37:59.748224  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:37:59.748236  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:37:59.748249  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:37:59.748261  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:37:59.748273  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:37:59.748288  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:37:59.748343  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:37:59.748376  407144 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:37:59.748391  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:37:59.748414  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:37:59.748439  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:37:59.748459  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:37:59.748493  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:37:59.748518  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:37:59.748531  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:37:59.748543  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:37:59.749421  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:37:59.774230  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:37:59.798630  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:37:59.824171  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:37:59.848364  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 20:37:59.872223  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:37:59.897063  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:37:59.921583  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:37:59.945811  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:37:59.970281  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:37:59.994835  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:38:00.033516  407144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:38:00.063556  407144 ssh_runner.go:195] Run: openssl version
	I0419 20:38:00.069802  407144 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 20:38:00.070076  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:38:00.082505  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087294  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087458  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087529  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.093481  407144 command_runner.go:130] > 3ec20f2e
	I0419 20:38:00.093672  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:38:00.105374  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:38:00.117722  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122531  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122568  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122619  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.128767  407144 command_runner.go:130] > b5213941
	I0419 20:38:00.128953  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:38:00.140337  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:38:00.152538  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157285  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157440  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157509  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.163441  407144 command_runner.go:130] > 51391683
	I0419 20:38:00.163515  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:38:00.174520  407144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:38:00.179585  407144 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:38:00.179616  407144 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0419 20:38:00.179623  407144 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0419 20:38:00.179629  407144 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 20:38:00.179639  407144 command_runner.go:130] > Access: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179645  407144 command_runner.go:130] > Modify: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179651  407144 command_runner.go:130] > Change: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179659  407144 command_runner.go:130] >  Birth: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179719  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:38:00.186138  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.186264  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:38:00.192508  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.192713  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:38:00.198887  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.198984  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:38:00.205720  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.205792  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:38:00.212084  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.212158  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 20:38:00.218077  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.218267  407144 kubeadm.go:391] StartCluster: {Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:38:00.218429  407144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:38:00.218478  407144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:38:00.260412  407144 command_runner.go:130] > ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8
	I0419 20:38:00.260436  407144 command_runner.go:130] > 9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a
	I0419 20:38:00.260442  407144 command_runner.go:130] > 89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8
	I0419 20:38:00.260449  407144 command_runner.go:130] > 24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b
	I0419 20:38:00.260454  407144 command_runner.go:130] > 1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3
	I0419 20:38:00.260460  407144 command_runner.go:130] > 81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec
	I0419 20:38:00.260465  407144 command_runner.go:130] > 9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027
	I0419 20:38:00.260471  407144 command_runner.go:130] > 3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b
	I0419 20:38:00.260487  407144 cri.go:89] found id: "ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8"
	I0419 20:38:00.260494  407144 cri.go:89] found id: "9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a"
	I0419 20:38:00.260497  407144 cri.go:89] found id: "89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8"
	I0419 20:38:00.260500  407144 cri.go:89] found id: "24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b"
	I0419 20:38:00.260503  407144 cri.go:89] found id: "1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3"
	I0419 20:38:00.260506  407144 cri.go:89] found id: "81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec"
	I0419 20:38:00.260509  407144 cri.go:89] found id: "9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027"
	I0419 20:38:00.260512  407144 cri.go:89] found id: "3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b"
	I0419 20:38:00.260514  407144 cri.go:89] found id: ""
	I0419 20:38:00.260561  407144 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.282795651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559164282752879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e60a60d7-7bd1-40b1-a56e-3bfe0a57f6a2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.283720305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=320b5283-b7ca-4243-b2aa-b458aff08ab7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.283784668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=320b5283-b7ca-4243-b2aa-b458aff08ab7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.284348787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=320b5283-b7ca-4243-b2aa-b458aff08ab7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.338540166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74ab63e2-979e-4654-8d88-2e3994c6264d name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.338613145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74ab63e2-979e-4654-8d88-2e3994c6264d name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.340353345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36ff8f1b-1d0e-424d-9d15-1002942f27bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.340748054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559164340726165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36ff8f1b-1d0e-424d-9d15-1002942f27bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.341368778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdc85224-d44c-4f86-bc1d-78500c0a23d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.341446379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdc85224-d44c-4f86-bc1d-78500c0a23d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.341818583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdc85224-d44c-4f86-bc1d-78500c0a23d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.394665622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad73b38c-fb91-4a8d-9207-58a15b903a54 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.394748203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad73b38c-fb91-4a8d-9207-58a15b903a54 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.396388639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bbb7ba4-6a12-402c-8328-5706008548d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.396801407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559164396777041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bbb7ba4-6a12-402c-8328-5706008548d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.397637481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abbe9600-34d9-4b75-822c-6bfdb0ec8829 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.397713986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abbe9600-34d9-4b75-822c-6bfdb0ec8829 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.398232453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abbe9600-34d9-4b75-822c-6bfdb0ec8829 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.450480387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a120380-28c4-4511-8eb2-1a5ad0985dea name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.450556727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a120380-28c4-4511-8eb2-1a5ad0985dea name=/runtime.v1.RuntimeService/Version
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.452475404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2195abdd-796e-414c-a8a9-3f118bb3f69a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.452892684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559164452867645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2195abdd-796e-414c-a8a9-3f118bb3f69a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.454085630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e92d6d4-7f63-4902-b86d-39d70e504678 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.454150819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e92d6d4-7f63-4902-b86d-39d70e504678 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:39:24 multinode-151935 crio[2862]: time="2024-04-19 20:39:24.454542350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e92d6d4-7f63-4902-b86d-39d70e504678 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f6ce16249835       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      43 seconds ago       Running             busybox                   1                   6e273ec8d1d9a       busybox-fc5497c4f-f2s7v
	6de66ff0d75df       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   ee62b2f1b19e2       kindnet-mgj2r
	d00f94da13776       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   56fac3785a6d6       coredns-7db6d8ff4d-7ncj2
	adaf22179ac8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   158ff93c29153       storage-provisioner
	c4cd494c894c1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   e600f5c04a75c       kube-proxy-pfnc8
	da54f2104a830       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   ece428f85ee15       kube-scheduler-multinode-151935
	b6bd063626d64       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   8028d39464849       etcd-multinode-151935
	5e5dc0ad75a91       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   79119ff513a4d       kube-controller-manager-multinode-151935
	1bbb8b32a56a4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   7735d89a349fa       kube-apiserver-multinode-151935
	81914a16099f9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   8b5dc1eab8597       busybox-fc5497c4f-f2s7v
	ae6c0e5292985       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   f1046615fbdaf       coredns-7db6d8ff4d-7ncj2
	9effe8852fc9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   91a88e787bfd3       storage-provisioner
	89d6ab542b25d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   536bb6456a583       kindnet-mgj2r
	24ecb604c74da       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   2309b7e0018ff       kube-proxy-pfnc8
	1e419925fabf2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   99f0597788a12       kube-scheduler-multinode-151935
	81e13f7892581       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   4fa61e4820012       etcd-multinode-151935
	9f504fd220a12       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   b7de6532ad4eb       kube-controller-manager-multinode-151935
	3db906bb1d4a7       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   7a03fdbb6f806       kube-apiserver-multinode-151935
	
	
	==> coredns [ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8] <==
	[INFO] 10.244.1.2:55673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001780232s
	[INFO] 10.244.1.2:42779 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121965s
	[INFO] 10.244.1.2:54976 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087587s
	[INFO] 10.244.1.2:54596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001056972s
	[INFO] 10.244.1.2:49581 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128884s
	[INFO] 10.244.1.2:53346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124796s
	[INFO] 10.244.1.2:60576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082191s
	[INFO] 10.244.0.3:53008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073484s
	[INFO] 10.244.0.3:37312 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049401s
	[INFO] 10.244.0.3:53048 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066487s
	[INFO] 10.244.0.3:56090 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039983s
	[INFO] 10.244.1.2:58089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182872s
	[INFO] 10.244.1.2:37086 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104024s
	[INFO] 10.244.1.2:40482 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093839s
	[INFO] 10.244.1.2:49147 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123093s
	[INFO] 10.244.0.3:49920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118155s
	[INFO] 10.244.0.3:53138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103693s
	[INFO] 10.244.0.3:34890 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083491s
	[INFO] 10.244.0.3:51926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115206s
	[INFO] 10.244.1.2:46032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.1.2:57913 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095258s
	[INFO] 10.244.1.2:45352 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179796s
	[INFO] 10.244.1.2:47384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089741s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41999 - 13478 "HINFO IN 101043259357947176.8207489402340935840. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013216151s
	
	
	==> describe nodes <==
	Name:               multinode-151935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=multinode-151935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_31_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:31:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151935
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:39:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:32:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-151935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb916782ac324c04b03ac6d164cc3d51
	  System UUID:                cb916782-ac32-4c04-b03a-c6d164cc3d51
	  Boot ID:                    21d22713-d4ba-4521-b0fa-24d0e20f332c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2s7v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 coredns-7db6d8ff4d-7ncj2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-multinode-151935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m29s
	  kube-system                 kindnet-mgj2r                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-multinode-151935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-multinode-151935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-pfnc8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-multinode-151935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m13s              kube-proxy       
	  Normal  Starting                 76s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m29s              kubelet          Node multinode-151935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s              kubelet          Node multinode-151935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s              kubelet          Node multinode-151935 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m29s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m16s              node-controller  Node multinode-151935 event: Registered Node multinode-151935 in Controller
	  Normal  NodeReady                7m13s              kubelet          Node multinode-151935 status is now: NodeReady
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-151935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-151935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-151935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node multinode-151935 event: Registered Node multinode-151935 in Controller
	
	
	Name:               multinode-151935-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=multinode-151935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_38_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:38:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:39:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:38:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:38:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:38:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:38:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    multinode-151935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85e078ecfcc44c72b0c2735fb2a58458
	  System UUID:                85e078ec-fcc4-4c72-b0c2-735fb2a58458
	  Boot ID:                    acf7c25a-2ae1-4cfb-acec-9349d36a9a2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zkwq6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-v9lfd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-mb775           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m34s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-151935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-151935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-151935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m30s                  kubelet     Node multinode-151935-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-151935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-151935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-151935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-151935-m02 status is now: NodeReady
	
	
	Name:               multinode-151935-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151935-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=multinode-151935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_39_12_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151935-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:39:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:39:21 +0000   Fri, 19 Apr 2024 20:39:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:39:21 +0000   Fri, 19 Apr 2024 20:39:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:39:21 +0000   Fri, 19 Apr 2024 20:39:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:39:21 +0000   Fri, 19 Apr 2024 20:39:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    multinode-151935-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73bc0f5b941c401e932acfb7535bf84e
	  System UUID:                73bc0f5b-941c-401e-932a-cfb7535bf84e
	  Boot ID:                    25030113-b1d2-4232-8b0e-4b4883c36f9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z6zkf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-proxy-b448r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m47s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m8s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet     Node multinode-151935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m42s                  kubelet     Node multinode-151935-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m12s (x2 over 5m12s)  kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m12s (x2 over 5m12s)  kubelet     Node multinode-151935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m12s (x2 over 5m12s)  kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m4s                   kubelet     Node multinode-151935-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12s (x2 over 13s)      kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 13s)      kubelet     Node multinode-151935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 13s)      kubelet     Node multinode-151935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-151935-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060674] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067881] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.168558] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.148384] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.296625] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.486032] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.057124] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.288351] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.959622] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.084938] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.091112] kauditd_printk_skb: 30 callbacks suppressed
	[Apr19 20:32] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.134814] kauditd_printk_skb: 21 callbacks suppressed
	[ +47.877989] kauditd_printk_skb: 84 callbacks suppressed
	[Apr19 20:37] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.145625] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.179886] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.154164] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.290106] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.748943] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[Apr19 20:38] systemd-fstab-generator[3071]: Ignoring "noauto" option for root device
	[  +5.734286] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.162784] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[  +0.110444] kauditd_printk_skb: 32 callbacks suppressed
	[ +22.465643] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec] <==
	{"level":"info","ts":"2024-04-19T20:32:51.149116Z","caller":"traceutil/trace.go:171","msg":"trace[1960558022] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"263.684207ms","start":"2024-04-19T20:32:50.885353Z","end":"2024-04-19T20:32:51.149037Z","steps":["trace[1960558022] 'process raft request'  (duration: 198.113813ms)","trace[1960558022] 'compare'  (duration: 65.410551ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:32.563288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.668407ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517751175823665660 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151935-m03.17c7c8a933b5563c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151935-m03.17c7c8a933b5563c\" value_size:642 lease:1294379138968889611 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-19T20:33:32.563831Z","caller":"traceutil/trace.go:171","msg":"trace[499971293] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:613; }","duration":"170.950638ms","start":"2024-04-19T20:33:32.392853Z","end":"2024-04-19T20:33:32.563803Z","steps":["trace[499971293] 'read index received'  (duration: 170.37593ms)","trace[499971293] 'applied index is now lower than readState.Index'  (duration: 574.053µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-19T20:33:32.564013Z","caller":"traceutil/trace.go:171","msg":"trace[716437160] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"205.842579ms","start":"2024-04-19T20:33:32.358105Z","end":"2024-04-19T20:33:32.563948Z","steps":["trace[716437160] 'process raft request'  (duration: 205.618467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:33:32.564376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.510753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-19T20:33:32.564486Z","caller":"traceutil/trace.go:171","msg":"trace[7069641] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:582; }","duration":"171.695105ms","start":"2024-04-19T20:33:32.392774Z","end":"2024-04-19T20:33:32.564469Z","steps":["trace[7069641] 'agreement among raft nodes before linearized reading'  (duration: 171.500362ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:32.56605Z","caller":"traceutil/trace.go:171","msg":"trace[708220390] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"250.67648ms","start":"2024-04-19T20:33:32.313267Z","end":"2024-04-19T20:33:32.563943Z","steps":["trace[708220390] 'process raft request'  (duration: 72.575873ms)","trace[708220390] 'compare'  (duration: 176.583148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:37.474154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.239102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151935-m03\" ","response":"range_response_count:1 size:3030"}
	{"level":"info","ts":"2024-04-19T20:33:37.474622Z","caller":"traceutil/trace.go:171","msg":"trace[1654905082] range","detail":"{range_begin:/registry/minions/multinode-151935-m03; range_end:; response_count:1; response_revision:617; }","duration":"109.614708ms","start":"2024-04-19T20:33:37.364857Z","end":"2024-04-19T20:33:37.474472Z","steps":["trace[1654905082] 'range keys from in-memory index tree'  (duration: 109.060984ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.747012Z","caller":"traceutil/trace.go:171","msg":"trace[239417769] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"195.240305ms","start":"2024-04-19T20:33:37.551706Z","end":"2024-04-19T20:33:37.746947Z","steps":["trace[239417769] 'process raft request'  (duration: 195.067773ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.746947Z","caller":"traceutil/trace.go:171","msg":"trace[1071672218] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:654; }","duration":"122.514345ms","start":"2024-04-19T20:33:37.624409Z","end":"2024-04-19T20:33:37.746923Z","steps":["trace[1071672218] 'read index received'  (duration: 122.330153ms)","trace[1071672218] 'applied index is now lower than readState.Index'  (duration: 182.871µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:37.748032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.598935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151935-m03\" ","response":"range_response_count:1 size:3030"}
	{"level":"info","ts":"2024-04-19T20:33:37.748105Z","caller":"traceutil/trace.go:171","msg":"trace[31971872] range","detail":"{range_begin:/registry/minions/multinode-151935-m03; range_end:; response_count:1; response_revision:618; }","duration":"123.703811ms","start":"2024-04-19T20:33:37.624385Z","end":"2024-04-19T20:33:37.748089Z","steps":["trace[31971872] 'agreement among raft nodes before linearized reading'  (duration: 122.653469ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.937297Z","caller":"traceutil/trace.go:171","msg":"trace[740861935] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"125.749175ms","start":"2024-04-19T20:33:37.811528Z","end":"2024-04-19T20:33:37.937277Z","steps":["trace[740861935] 'process raft request'  (duration: 125.640378ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:36:26.750233Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-19T20:36:26.750396Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-151935","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	{"level":"warn","ts":"2024-04-19T20:36:26.750489Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:36:26.750572Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/04/19 20:36:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:36:26.802823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:36:26.803334Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-19T20:36:26.804685Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97ba5874d4d591f6","current-leader-member-id":"97ba5874d4d591f6"}
	{"level":"info","ts":"2024-04-19T20:36:26.806886Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:36:26.807127Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:36:26.807174Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-151935","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	
	
	==> etcd [b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064] <==
	{"level":"info","ts":"2024-04-19T20:38:03.237389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:38:03.237403Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:38:03.237721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 switched to configuration voters=(10933148304205517302)"}
	{"level":"info","ts":"2024-04-19T20:38:03.2378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","added-peer-id":"97ba5874d4d591f6","added-peer-peer-urls":["https://192.168.39.193:2380"]}
	{"level":"info","ts":"2024-04-19T20:38:03.237928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:38:03.238031Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:38:03.278249Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T20:38:03.27858Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:38:03.278652Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:38:03.278899Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:38:03.281Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:38:04.974378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.974439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.974491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.97452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.980075Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:multinode-151935 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:38:04.980091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:38:04.980193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:38:04.980601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T20:38:04.980669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-19T20:38:04.982507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"info","ts":"2024-04-19T20:38:04.982564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:39:25 up 8 min,  0 users,  load average: 0.18, 0.16, 0.09
	Linux multinode-151935 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1] <==
	I0419 20:38:38.630343       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:38:48.635177       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:38:48.635214       1 main.go:227] handling current node
	I0419 20:38:48.635239       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:38:48.635275       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:38:48.635419       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:38:48.635459       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:38:58.646118       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:38:58.646254       1 main.go:227] handling current node
	I0419 20:38:58.646283       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:38:58.646319       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:38:58.646470       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:38:58.646497       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:39:08.686215       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:39:08.686304       1 main.go:227] handling current node
	I0419 20:39:08.686327       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:39:08.686344       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:39:08.686480       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:39:08.686575       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:39:18.695559       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:39:18.695661       1 main.go:227] handling current node
	I0419 20:39:18.695691       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:39:18.695709       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:39:18.695818       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:39:18.695839       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8] <==
	I0419 20:35:41.908446       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:35:51.921307       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:35:51.921478       1 main.go:227] handling current node
	I0419 20:35:51.921509       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:35:51.921529       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:35:51.921672       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:35:51.921694       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:01.932663       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:01.932901       1 main.go:227] handling current node
	I0419 20:36:01.932941       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:01.933053       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:01.933258       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:01.933313       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:11.941239       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:11.941467       1 main.go:227] handling current node
	I0419 20:36:11.941575       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:11.941600       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:11.941746       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:11.941771       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:21.947833       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:21.947929       1 main.go:227] handling current node
	I0419 20:36:21.948014       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:21.948046       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:21.948205       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:21.948234       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0] <==
	I0419 20:38:06.272036       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 20:38:06.464023       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:38:06.464132       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 20:38:06.464160       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 20:38:06.465046       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:38:06.465829       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 20:38:06.466059       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 20:38:06.470831       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 20:38:06.472027       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 20:38:06.472095       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 20:38:06.472127       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:38:06.472149       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:38:06.472172       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:38:06.472195       1 cache.go:39] Caches are synced for autoregister controller
	I0419 20:38:06.475028       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:38:06.475060       1 policy_source.go:224] refreshing policies
	I0419 20:38:06.476511       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:38:07.273070       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 20:38:08.966532       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:38:09.084206       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 20:38:09.096217       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:38:09.169424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 20:38:09.175810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 20:38:19.068063       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 20:38:19.117446       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b] <==
	I0419 20:36:26.759744       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 20:36:26.759983       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 20:36:26.762591       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0419 20:36:26.764248       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764337       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764372       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764406       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0419 20:36:26.765369       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765415       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765426       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765433       1 controller.go:176] quota evaluator worker shutdown
	E0419 20:36:26.767324       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767378       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767413       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767446       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767482       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767516       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767529       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771559       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771637       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771644       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771692       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771669       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771740       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771773       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a] <==
	I0419 20:38:19.476654       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 20:38:19.480190       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 20:38:40.033387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.193871ms"
	I0419 20:38:40.045299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.74393ms"
	I0419 20:38:40.061785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.835276ms"
	I0419 20:38:40.062017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.286µs"
	I0419 20:38:43.632228       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m02\" does not exist"
	I0419 20:38:43.647296       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:38:45.521388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.261µs"
	I0419 20:38:45.576323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.904µs"
	I0419 20:38:45.593157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.085µs"
	I0419 20:38:45.610318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.332µs"
	I0419 20:38:45.620411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.779µs"
	I0419 20:38:45.624344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.061µs"
	I0419 20:38:49.844802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.445µs"
	I0419 20:38:52.523220       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:38:52.543417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.724µs"
	I0419 20:38:52.557866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.917µs"
	I0419 20:38:55.951540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.101942ms"
	I0419 20:38:55.951781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.129µs"
	I0419 20:39:11.041103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:39:12.076511       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:39:12.077148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:39:12.090776       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:39:21.262847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	
	
	==> kube-controller-manager [9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027] <==
	I0419 20:32:45.803048       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:32:48.479454       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151935-m02"
	I0419 20:32:54.606341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:32:56.882556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.3739ms"
	I0419 20:32:56.893401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.739588ms"
	I0419 20:32:56.898274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.208µs"
	I0419 20:32:56.914704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.244µs"
	I0419 20:32:56.931809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.021µs"
	I0419 20:33:00.156434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.739255ms"
	I0419 20:33:00.156527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.809µs"
	I0419 20:33:00.727290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.931358ms"
	I0419 20:33:00.727427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.608µs"
	I0419 20:33:32.569916       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:33:32.570114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:33:32.610336       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:33:33.500891       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151935-m03"
	I0419 20:33:42.143153       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:11.587416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:12.788306       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:34:12.788855       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:12.796538       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.3.0/24"]
	I0419 20:34:20.899683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:58.551807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m03"
	I0419 20:34:58.598115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.440083ms"
	I0419 20:34:58.598363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.141µs"
	
	
	==> kube-proxy [24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b] <==
	I0419 20:32:10.970926       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:32:10.979351       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0419 20:32:11.042429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:32:11.043069       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:32:11.043120       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:32:11.048887       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:32:11.049175       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:32:11.049218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:32:11.051700       1 config.go:192] "Starting service config controller"
	I0419 20:32:11.051743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:32:11.051764       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:32:11.051768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:32:11.053742       1 config.go:319] "Starting node config controller"
	I0419 20:32:11.053775       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:32:11.151886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:32:11.151890       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:32:11.153929       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51] <==
	I0419 20:38:07.983932       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:38:08.013636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0419 20:38:08.088850       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:38:08.089046       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:38:08.089136       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:38:08.097460       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:38:08.097654       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:38:08.097691       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:38:08.103272       1 config.go:192] "Starting service config controller"
	I0419 20:38:08.103306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:38:08.103335       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:38:08.103339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:38:08.103748       1 config.go:319] "Starting node config controller"
	I0419 20:38:08.104021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:38:08.204364       1 shared_informer.go:320] Caches are synced for node config
	I0419 20:38:08.204412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:38:08.204520       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3] <==
	E0419 20:31:53.115807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:53.115934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:31:53.116041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:31:53.120150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:31:53.120266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:31:53.968435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 20:31:53.968507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 20:31:53.981803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:31:53.981853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:31:54.128507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0419 20:31:54.128599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0419 20:31:54.133702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:31:54.133840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:31:54.201290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0419 20:31:54.201362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0419 20:31:54.204575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:31:54.204630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:54.210685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:31:54.210773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:54.227503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:31:54.227615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:31:54.235122       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:31:54.236177       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 20:31:57.200793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0419 20:36:26.748652       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f] <==
	I0419 20:38:03.848100       1 serving.go:380] Generated self-signed cert in-memory
	W0419 20:38:06.345899       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0419 20:38:06.346053       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:38:06.346087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0419 20:38:06.346171       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 20:38:06.391666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 20:38:06.391721       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:38:06.398848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 20:38:06.401563       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 20:38:06.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 20:38:06.401667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:38:06.501925       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 19 20:38:02 multinode-151935 kubelet[3078]: E0419 20:38:02.883160    3078 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.193:8443: connect: connection refused
	Apr 19 20:38:03 multinode-151935 kubelet[3078]: I0419 20:38:03.512436    3078 kubelet_node_status.go:73] "Attempting to register node" node="multinode-151935"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.535188    3078 kubelet_node_status.go:112] "Node was previously registered" node="multinode-151935"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.535596    3078 kubelet_node_status.go:76] "Successfully registered node" node="multinode-151935"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.537100    3078 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.538201    3078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.972904    3078 apiserver.go:52] "Watching apiserver"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.977284    3078 topology_manager.go:215] "Topology Admit Handler" podUID="6a264fca-0e90-4c53-a0e8-baffaa4a5f1d" podNamespace="kube-system" podName="kindnet-mgj2r"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.977610    3078 topology_manager.go:215] "Topology Admit Handler" podUID="4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25" podNamespace="kube-system" podName="kube-proxy-pfnc8"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.977736    3078 topology_manager.go:215] "Topology Admit Handler" podUID="fbff591f-c922-499e-b7a3-b79db23598bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7ncj2"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.977838    3078 topology_manager.go:215] "Topology Admit Handler" podUID="ac701485-7f72-481a-8bd5-2e40f0685d63" podNamespace="kube-system" podName="storage-provisioner"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.978019    3078 topology_manager.go:215] "Topology Admit Handler" podUID="882e1af3-63cf-42b9-ae3b-2ea2280ff033" podNamespace="default" podName="busybox-fc5497c4f-f2s7v"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.004285    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.067630    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-xtables-lock\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068061    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25-lib-modules\") pod \"kube-proxy-pfnc8\" (UID: \"4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25\") " pod="kube-system/kube-proxy-pfnc8"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068207    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac701485-7f72-481a-8bd5-2e40f0685d63-tmp\") pod \"storage-provisioner\" (UID: \"ac701485-7f72-481a-8bd5-2e40f0685d63\") " pod="kube-system/storage-provisioner"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068373    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-lib-modules\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068433    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25-xtables-lock\") pod \"kube-proxy-pfnc8\" (UID: \"4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25\") " pod="kube-system/kube-proxy-pfnc8"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068485    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-cni-cfg\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:15 multinode-151935 kubelet[3078]: I0419 20:38:15.730154    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 19 20:39:02 multinode-151935 kubelet[3078]: E0419 20:39:02.075185    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:39:23.982012  408212 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-151935 -n multinode-151935
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-151935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (302.36s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 stop
E0419 20:40:13.276322  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151935 stop: exit status 82 (2m0.494581677s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-151935-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-151935 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151935 status: exit status 3 (18.812118471s)

                                                
                                                
-- stdout --
	multinode-151935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151935-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:41:47.749038  408877 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0419 20:41:47.749086  408877 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-151935 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-151935 -n multinode-151935
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-151935 logs -n 25: (1.57127021s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935:/home/docker/cp-test_multinode-151935-m02_multinode-151935.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935 sudo cat                                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m02_multinode-151935.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03:/home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935-m03 sudo cat                                   | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp testdata/cp-test.txt                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935:/home/docker/cp-test_multinode-151935-m03_multinode-151935.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935 sudo cat                                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m03_multinode-151935.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt                       | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m02:/home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n                                                                 | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | multinode-151935-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-151935 ssh -n multinode-151935-m02 sudo cat                                   | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	|         | /home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-151935 node stop m03                                                          | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:33 UTC |
	| node    | multinode-151935 node start                                                             | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:33 UTC | 19 Apr 24 20:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-151935                                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:34 UTC |                     |
	| stop    | -p multinode-151935                                                                     | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:34 UTC |                     |
	| start   | -p multinode-151935                                                                     | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:36 UTC | 19 Apr 24 20:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-151935                                                                | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:39 UTC |                     |
	| node    | multinode-151935 node delete                                                            | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:39 UTC | 19 Apr 24 20:39 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-151935 stop                                                                   | multinode-151935 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:36:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:36:25.803129  407144 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:36:25.803306  407144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:36:25.803315  407144 out.go:304] Setting ErrFile to fd 2...
	I0419 20:36:25.803319  407144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:36:25.803528  407144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:36:25.804417  407144 out.go:298] Setting JSON to false
	I0419 20:36:25.805481  407144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8332,"bootTime":1713550654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:36:25.805566  407144 start.go:139] virtualization: kvm guest
	I0419 20:36:25.808049  407144 out.go:177] * [multinode-151935] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:36:25.809997  407144 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:36:25.809966  407144 notify.go:220] Checking for updates...
	I0419 20:36:25.811273  407144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:36:25.812648  407144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:36:25.814374  407144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:36:25.815864  407144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:36:25.817234  407144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:36:25.818989  407144 config.go:182] Loaded profile config "multinode-151935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:36:25.819109  407144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:36:25.819753  407144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:36:25.819805  407144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:36:25.836258  407144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0419 20:36:25.836702  407144 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:36:25.837313  407144 main.go:141] libmachine: Using API Version  1
	I0419 20:36:25.837351  407144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:36:25.837724  407144 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:36:25.837911  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.874451  407144 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:36:25.875983  407144 start.go:297] selected driver: kvm2
	I0419 20:36:25.876001  407144 start.go:901] validating driver "kvm2" against &{Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:36:25.876223  407144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:36:25.876594  407144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:36:25.876694  407144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:36:25.891533  407144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:36:25.892241  407144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 20:36:25.892310  407144 cni.go:84] Creating CNI manager for ""
	I0419 20:36:25.892322  407144 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 20:36:25.892387  407144 start.go:340] cluster config:
	{Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:36:25.892544  407144 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:36:25.895070  407144 out.go:177] * Starting "multinode-151935" primary control-plane node in "multinode-151935" cluster
	I0419 20:36:25.896295  407144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:36:25.896340  407144 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:36:25.896355  407144 cache.go:56] Caching tarball of preloaded images
	I0419 20:36:25.896446  407144 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:36:25.896458  407144 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:36:25.896600  407144 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/config.json ...
	I0419 20:36:25.896836  407144 start.go:360] acquireMachinesLock for multinode-151935: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:36:25.896883  407144 start.go:364] duration metric: took 26.213µs to acquireMachinesLock for "multinode-151935"
	I0419 20:36:25.896903  407144 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:36:25.896914  407144 fix.go:54] fixHost starting: 
	I0419 20:36:25.897209  407144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:36:25.897245  407144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:36:25.912127  407144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0419 20:36:25.912580  407144 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:36:25.913118  407144 main.go:141] libmachine: Using API Version  1
	I0419 20:36:25.913141  407144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:36:25.913515  407144 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:36:25.913707  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.913877  407144 main.go:141] libmachine: (multinode-151935) Calling .GetState
	I0419 20:36:25.915479  407144 fix.go:112] recreateIfNeeded on multinode-151935: state=Running err=<nil>
	W0419 20:36:25.915501  407144 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:36:25.917311  407144 out.go:177] * Updating the running kvm2 "multinode-151935" VM ...
	I0419 20:36:25.918464  407144 machine.go:94] provisionDockerMachine start ...
	I0419 20:36:25.918481  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:36:25.918679  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:25.921051  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:25.921481  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:25.921506  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:25.921634  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:25.921791  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:25.921946  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:25.922106  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:25.922240  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:25.922472  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:25.922484  407144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:36:26.038786  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151935
	
	I0419 20:36:26.038817  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.039120  407144 buildroot.go:166] provisioning hostname "multinode-151935"
	I0419 20:36:26.039149  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.039327  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.041843  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.042250  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.042285  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.042388  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.042536  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.042715  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.042833  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.042993  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.043244  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.043261  407144 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-151935 && echo "multinode-151935" | sudo tee /etc/hostname
	I0419 20:36:26.170014  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151935
	
	I0419 20:36:26.170051  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.172804  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.173123  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.173158  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.173308  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.173531  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.173691  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.173824  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.173938  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.174137  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.174152  407144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-151935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-151935/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-151935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:36:26.294082  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:36:26.294118  407144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:36:26.294141  407144 buildroot.go:174] setting up certificates
	I0419 20:36:26.294152  407144 provision.go:84] configureAuth start
	I0419 20:36:26.294161  407144 main.go:141] libmachine: (multinode-151935) Calling .GetMachineName
	I0419 20:36:26.294517  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:36:26.297591  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.297998  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.298026  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.298206  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.300717  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.301109  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.301142  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.301258  407144 provision.go:143] copyHostCerts
	I0419 20:36:26.301295  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:36:26.301348  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:36:26.301361  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:36:26.301430  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:36:26.301543  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:36:26.301562  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:36:26.301569  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:36:26.301594  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:36:26.301647  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:36:26.301663  407144 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:36:26.301676  407144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:36:26.301698  407144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:36:26.301751  407144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.multinode-151935 san=[127.0.0.1 192.168.39.193 localhost minikube multinode-151935]
	I0419 20:36:26.442167  407144 provision.go:177] copyRemoteCerts
	I0419 20:36:26.442230  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:36:26.442258  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.445479  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.445805  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.445835  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.446085  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.446269  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.446461  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.446647  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:36:26.531216  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0419 20:36:26.531300  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:36:26.560108  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0419 20:36:26.560185  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:36:26.587458  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0419 20:36:26.587537  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0419 20:36:26.614424  407144 provision.go:87] duration metric: took 320.255427ms to configureAuth
	I0419 20:36:26.614462  407144 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:36:26.614769  407144 config.go:182] Loaded profile config "multinode-151935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:36:26.614853  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:36:26.617435  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.617810  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:36:26.617839  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:36:26.618032  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:36:26.618252  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.618405  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:36:26.618546  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:36:26.618724  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:36:26.618895  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:36:26.618911  407144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:37:57.447685  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:37:57.447759  407144 machine.go:97] duration metric: took 1m31.529281382s to provisionDockerMachine
	I0419 20:37:57.447778  407144 start.go:293] postStartSetup for "multinode-151935" (driver="kvm2")
	I0419 20:37:57.447790  407144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:37:57.447817  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.448175  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:37:57.448215  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.451589  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.452114  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.452146  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.452340  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.452562  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.452761  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.452938  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.541370  407144 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:37:57.546155  407144 command_runner.go:130] > NAME=Buildroot
	I0419 20:37:57.546178  407144 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0419 20:37:57.546185  407144 command_runner.go:130] > ID=buildroot
	I0419 20:37:57.546193  407144 command_runner.go:130] > VERSION_ID=2023.02.9
	I0419 20:37:57.546201  407144 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0419 20:37:57.546240  407144 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:37:57.546288  407144 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:37:57.546354  407144 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:37:57.546447  407144 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:37:57.546462  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /etc/ssl/certs/3739982.pem
	I0419 20:37:57.546552  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:37:57.556819  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:37:57.583513  407144 start.go:296] duration metric: took 135.719505ms for postStartSetup
	I0419 20:37:57.583568  407144 fix.go:56] duration metric: took 1m31.686655628s for fixHost
	I0419 20:37:57.583597  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.586777  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.587212  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.587235  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.587396  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.587607  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.587759  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.587878  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.588069  407144 main.go:141] libmachine: Using SSH client type: native
	I0419 20:37:57.588249  407144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0419 20:37:57.588261  407144 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:37:57.697797  407144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713559077.679570718
	
	I0419 20:37:57.697831  407144 fix.go:216] guest clock: 1713559077.679570718
	I0419 20:37:57.697860  407144 fix.go:229] Guest: 2024-04-19 20:37:57.679570718 +0000 UTC Remote: 2024-04-19 20:37:57.583573936 +0000 UTC m=+91.830385825 (delta=95.996782ms)
	I0419 20:37:57.697910  407144 fix.go:200] guest clock delta is within tolerance: 95.996782ms
	I0419 20:37:57.697916  407144 start.go:83] releasing machines lock for "multinode-151935", held for 1m31.801020731s
	I0419 20:37:57.697938  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.698222  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:37:57.700894  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.701286  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.701314  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.701462  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702000  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702210  407144 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:37:57.702278  407144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:37:57.702325  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.702427  407144 ssh_runner.go:195] Run: cat /version.json
	I0419 20:37:57.702452  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:37:57.705273  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705492  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705779  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.705805  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705894  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:57.705923  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:57.705989  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.706253  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.706342  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:37:57.706475  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.706550  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:37:57.706606  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.706744  407144 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:37:57.706881  407144 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:37:57.786184  407144 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0419 20:37:57.786348  407144 ssh_runner.go:195] Run: systemctl --version
	I0419 20:37:57.821378  407144 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0419 20:37:57.821427  407144 command_runner.go:130] > systemd 252 (252)
	I0419 20:37:57.821446  407144 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0419 20:37:57.821519  407144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:37:57.986198  407144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0419 20:37:57.992452  407144 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0419 20:37:57.992520  407144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:37:57.992592  407144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:37:58.003212  407144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:37:58.003243  407144 start.go:494] detecting cgroup driver to use...
	I0419 20:37:58.003307  407144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:37:58.020749  407144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:37:58.036061  407144 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:37:58.036129  407144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:37:58.050866  407144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:37:58.065449  407144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:37:58.220346  407144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:37:58.365779  407144 docker.go:233] disabling docker service ...
	I0419 20:37:58.365907  407144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:37:58.382445  407144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:37:58.397620  407144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:37:58.546387  407144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:37:58.690971  407144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:37:58.705896  407144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:37:58.726346  407144 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0419 20:37:58.726939  407144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:37:58.727008  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.739072  407144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:37:58.739141  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.750596  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.762176  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.773657  407144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:37:58.785295  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.796806  407144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.808986  407144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:37:58.819802  407144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:37:58.829679  407144 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0419 20:37:58.829777  407144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:37:58.839266  407144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:37:58.983818  407144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:37:59.247690  407144 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:37:59.247758  407144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:37:59.252912  407144 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0419 20:37:59.252940  407144 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0419 20:37:59.252950  407144 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I0419 20:37:59.252960  407144 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 20:37:59.252968  407144 command_runner.go:130] > Access: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.252987  407144 command_runner.go:130] > Modify: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.252999  407144 command_runner.go:130] > Change: 2024-04-19 20:37:59.116590545 +0000
	I0419 20:37:59.253004  407144 command_runner.go:130] >  Birth: -
	I0419 20:37:59.253124  407144 start.go:562] Will wait 60s for crictl version
	I0419 20:37:59.253193  407144 ssh_runner.go:195] Run: which crictl
	I0419 20:37:59.257123  407144 command_runner.go:130] > /usr/bin/crictl
	I0419 20:37:59.257203  407144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:37:59.295793  407144 command_runner.go:130] > Version:  0.1.0
	I0419 20:37:59.295815  407144 command_runner.go:130] > RuntimeName:  cri-o
	I0419 20:37:59.295820  407144 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0419 20:37:59.295825  407144 command_runner.go:130] > RuntimeApiVersion:  v1
	I0419 20:37:59.295986  407144 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:37:59.296086  407144 ssh_runner.go:195] Run: crio --version
	I0419 20:37:59.326782  407144 command_runner.go:130] > crio version 1.29.1
	I0419 20:37:59.326814  407144 command_runner.go:130] > Version:        1.29.1
	I0419 20:37:59.326824  407144 command_runner.go:130] > GitCommit:      unknown
	I0419 20:37:59.326835  407144 command_runner.go:130] > GitCommitDate:  unknown
	I0419 20:37:59.326842  407144 command_runner.go:130] > GitTreeState:   clean
	I0419 20:37:59.326851  407144 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0419 20:37:59.326858  407144 command_runner.go:130] > GoVersion:      go1.21.6
	I0419 20:37:59.326865  407144 command_runner.go:130] > Compiler:       gc
	I0419 20:37:59.326873  407144 command_runner.go:130] > Platform:       linux/amd64
	I0419 20:37:59.326879  407144 command_runner.go:130] > Linkmode:       dynamic
	I0419 20:37:59.326904  407144 command_runner.go:130] > BuildTags:      
	I0419 20:37:59.326915  407144 command_runner.go:130] >   containers_image_ostree_stub
	I0419 20:37:59.326923  407144 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0419 20:37:59.326930  407144 command_runner.go:130] >   btrfs_noversion
	I0419 20:37:59.326939  407144 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0419 20:37:59.326945  407144 command_runner.go:130] >   libdm_no_deferred_remove
	I0419 20:37:59.326953  407144 command_runner.go:130] >   seccomp
	I0419 20:37:59.326960  407144 command_runner.go:130] > LDFlags:          unknown
	I0419 20:37:59.326970  407144 command_runner.go:130] > SeccompEnabled:   true
	I0419 20:37:59.326977  407144 command_runner.go:130] > AppArmorEnabled:  false
	I0419 20:37:59.328167  407144 ssh_runner.go:195] Run: crio --version
	I0419 20:37:59.357676  407144 command_runner.go:130] > crio version 1.29.1
	I0419 20:37:59.357703  407144 command_runner.go:130] > Version:        1.29.1
	I0419 20:37:59.357710  407144 command_runner.go:130] > GitCommit:      unknown
	I0419 20:37:59.357714  407144 command_runner.go:130] > GitCommitDate:  unknown
	I0419 20:37:59.357718  407144 command_runner.go:130] > GitTreeState:   clean
	I0419 20:37:59.357724  407144 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0419 20:37:59.357728  407144 command_runner.go:130] > GoVersion:      go1.21.6
	I0419 20:37:59.357732  407144 command_runner.go:130] > Compiler:       gc
	I0419 20:37:59.357736  407144 command_runner.go:130] > Platform:       linux/amd64
	I0419 20:37:59.357741  407144 command_runner.go:130] > Linkmode:       dynamic
	I0419 20:37:59.357745  407144 command_runner.go:130] > BuildTags:      
	I0419 20:37:59.357752  407144 command_runner.go:130] >   containers_image_ostree_stub
	I0419 20:37:59.357759  407144 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0419 20:37:59.357765  407144 command_runner.go:130] >   btrfs_noversion
	I0419 20:37:59.357772  407144 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0419 20:37:59.357779  407144 command_runner.go:130] >   libdm_no_deferred_remove
	I0419 20:37:59.357786  407144 command_runner.go:130] >   seccomp
	I0419 20:37:59.357793  407144 command_runner.go:130] > LDFlags:          unknown
	I0419 20:37:59.357799  407144 command_runner.go:130] > SeccompEnabled:   true
	I0419 20:37:59.357804  407144 command_runner.go:130] > AppArmorEnabled:  false
	I0419 20:37:59.359685  407144 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:37:59.361159  407144 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:37:59.363527  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:59.363964  407144 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:37:59.363995  407144 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:37:59.364172  407144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:37:59.368489  407144 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0419 20:37:59.368619  407144 kubeadm.go:877] updating cluster {Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:37:59.368790  407144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:37:59.368850  407144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:37:59.414453  407144 command_runner.go:130] > {
	I0419 20:37:59.414479  407144 command_runner.go:130] >   "images": [
	I0419 20:37:59.414486  407144 command_runner.go:130] >     {
	I0419 20:37:59.414496  407144 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0419 20:37:59.414504  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414513  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0419 20:37:59.414517  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414524  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414538  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0419 20:37:59.414553  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0419 20:37:59.414559  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414567  407144 command_runner.go:130] >       "size": "65291810",
	I0419 20:37:59.414574  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414580  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414591  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414595  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414599  407144 command_runner.go:130] >     },
	I0419 20:37:59.414604  407144 command_runner.go:130] >     {
	I0419 20:37:59.414613  407144 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0419 20:37:59.414624  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414632  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0419 20:37:59.414642  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414648  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414661  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0419 20:37:59.414671  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0419 20:37:59.414675  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414679  407144 command_runner.go:130] >       "size": "1363676",
	I0419 20:37:59.414683  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414689  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414695  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414699  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414704  407144 command_runner.go:130] >     },
	I0419 20:37:59.414713  407144 command_runner.go:130] >     {
	I0419 20:37:59.414723  407144 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0419 20:37:59.414735  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414747  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0419 20:37:59.414756  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414763  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414775  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0419 20:37:59.414783  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0419 20:37:59.414788  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414796  407144 command_runner.go:130] >       "size": "31470524",
	I0419 20:37:59.414806  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414826  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.414833  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414843  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414850  407144 command_runner.go:130] >     },
	I0419 20:37:59.414858  407144 command_runner.go:130] >     {
	I0419 20:37:59.414866  407144 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0419 20:37:59.414873  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.414881  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0419 20:37:59.414891  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414903  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.414917  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0419 20:37:59.414937  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0419 20:37:59.414946  407144 command_runner.go:130] >       ],
	I0419 20:37:59.414951  407144 command_runner.go:130] >       "size": "61245718",
	I0419 20:37:59.414958  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.414965  407144 command_runner.go:130] >       "username": "nonroot",
	I0419 20:37:59.414974  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.414981  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.414990  407144 command_runner.go:130] >     },
	I0419 20:37:59.414995  407144 command_runner.go:130] >     {
	I0419 20:37:59.415007  407144 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0419 20:37:59.415016  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415094  407144 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0419 20:37:59.415121  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415130  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415151  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0419 20:37:59.415164  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0419 20:37:59.415173  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415188  407144 command_runner.go:130] >       "size": "150779692",
	I0419 20:37:59.415197  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415207  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415218  407144 command_runner.go:130] >       },
	I0419 20:37:59.415226  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415235  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415243  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415247  407144 command_runner.go:130] >     },
	I0419 20:37:59.415255  407144 command_runner.go:130] >     {
	I0419 20:37:59.415269  407144 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0419 20:37:59.415303  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415316  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0419 20:37:59.415321  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415327  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415341  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0419 20:37:59.415357  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0419 20:37:59.415366  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415376  407144 command_runner.go:130] >       "size": "117609952",
	I0419 20:37:59.415385  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415395  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415404  407144 command_runner.go:130] >       },
	I0419 20:37:59.415410  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415417  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415422  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415430  407144 command_runner.go:130] >     },
	I0419 20:37:59.415440  407144 command_runner.go:130] >     {
	I0419 20:37:59.415453  407144 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0419 20:37:59.415465  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415477  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0419 20:37:59.415485  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415495  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415506  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0419 20:37:59.415523  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0419 20:37:59.415533  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415544  407144 command_runner.go:130] >       "size": "112170310",
	I0419 20:37:59.415553  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415564  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415574  407144 command_runner.go:130] >       },
	I0419 20:37:59.415583  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415589  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415595  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415604  407144 command_runner.go:130] >     },
	I0419 20:37:59.415613  407144 command_runner.go:130] >     {
	I0419 20:37:59.415627  407144 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0419 20:37:59.415636  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415648  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0419 20:37:59.415657  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415666  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415689  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0419 20:37:59.415706  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0419 20:37:59.415717  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415727  407144 command_runner.go:130] >       "size": "85932953",
	I0419 20:37:59.415736  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.415746  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415752  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415757  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415760  407144 command_runner.go:130] >     },
	I0419 20:37:59.415763  407144 command_runner.go:130] >     {
	I0419 20:37:59.415776  407144 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0419 20:37:59.415782  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415791  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0419 20:37:59.415801  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415808  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415819  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0419 20:37:59.415832  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0419 20:37:59.415838  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415845  407144 command_runner.go:130] >       "size": "63026502",
	I0419 20:37:59.415850  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.415854  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.415858  407144 command_runner.go:130] >       },
	I0419 20:37:59.415863  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.415869  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.415881  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.415886  407144 command_runner.go:130] >     },
	I0419 20:37:59.415895  407144 command_runner.go:130] >     {
	I0419 20:37:59.415905  407144 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0419 20:37:59.415915  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.415925  407144 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0419 20:37:59.415934  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415941  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.415951  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0419 20:37:59.415965  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0419 20:37:59.415978  407144 command_runner.go:130] >       ],
	I0419 20:37:59.415988  407144 command_runner.go:130] >       "size": "750414",
	I0419 20:37:59.415998  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.416008  407144 command_runner.go:130] >         "value": "65535"
	I0419 20:37:59.416013  407144 command_runner.go:130] >       },
	I0419 20:37:59.416019  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.416064  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.416079  407144 command_runner.go:130] >       "pinned": true
	I0419 20:37:59.416087  407144 command_runner.go:130] >     }
	I0419 20:37:59.416096  407144 command_runner.go:130] >   ]
	I0419 20:37:59.416106  407144 command_runner.go:130] > }
	I0419 20:37:59.416436  407144 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:37:59.416452  407144 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:37:59.416514  407144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:37:59.453644  407144 command_runner.go:130] > {
	I0419 20:37:59.453675  407144 command_runner.go:130] >   "images": [
	I0419 20:37:59.453680  407144 command_runner.go:130] >     {
	I0419 20:37:59.453689  407144 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0419 20:37:59.453693  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453699  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0419 20:37:59.453702  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453707  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453717  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0419 20:37:59.453725  407144 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0419 20:37:59.453735  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453742  407144 command_runner.go:130] >       "size": "65291810",
	I0419 20:37:59.453750  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.453756  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.453787  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.453795  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.453799  407144 command_runner.go:130] >     },
	I0419 20:37:59.453802  407144 command_runner.go:130] >     {
	I0419 20:37:59.453808  407144 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0419 20:37:59.453814  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453823  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0419 20:37:59.453829  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453836  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453849  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0419 20:37:59.453863  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0419 20:37:59.453869  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453879  407144 command_runner.go:130] >       "size": "1363676",
	I0419 20:37:59.453883  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.453891  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.453896  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.453903  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.453912  407144 command_runner.go:130] >     },
	I0419 20:37:59.453918  407144 command_runner.go:130] >     {
	I0419 20:37:59.453927  407144 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0419 20:37:59.453937  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.453946  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0419 20:37:59.453952  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453959  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.453972  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0419 20:37:59.453982  407144 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0419 20:37:59.453987  407144 command_runner.go:130] >       ],
	I0419 20:37:59.453995  407144 command_runner.go:130] >       "size": "31470524",
	I0419 20:37:59.454002  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454012  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454019  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454026  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454034  407144 command_runner.go:130] >     },
	I0419 20:37:59.454040  407144 command_runner.go:130] >     {
	I0419 20:37:59.454053  407144 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0419 20:37:59.454059  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454067  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0419 20:37:59.454072  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454081  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454094  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0419 20:37:59.454114  407144 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0419 20:37:59.454124  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454130  407144 command_runner.go:130] >       "size": "61245718",
	I0419 20:37:59.454139  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454146  407144 command_runner.go:130] >       "username": "nonroot",
	I0419 20:37:59.454156  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454164  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454173  407144 command_runner.go:130] >     },
	I0419 20:37:59.454179  407144 command_runner.go:130] >     {
	I0419 20:37:59.454193  407144 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0419 20:37:59.454199  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454210  407144 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0419 20:37:59.454218  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454224  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454236  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0419 20:37:59.454248  407144 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0419 20:37:59.454257  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454264  407144 command_runner.go:130] >       "size": "150779692",
	I0419 20:37:59.454284  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454291  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454300  407144 command_runner.go:130] >       },
	I0419 20:37:59.454307  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454316  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454321  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454324  407144 command_runner.go:130] >     },
	I0419 20:37:59.454328  407144 command_runner.go:130] >     {
	I0419 20:37:59.454337  407144 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0419 20:37:59.454348  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454358  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0419 20:37:59.454367  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454374  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454389  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0419 20:37:59.454404  407144 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0419 20:37:59.454411  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454415  407144 command_runner.go:130] >       "size": "117609952",
	I0419 20:37:59.454424  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454430  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454439  407144 command_runner.go:130] >       },
	I0419 20:37:59.454445  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454455  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454461  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454470  407144 command_runner.go:130] >     },
	I0419 20:37:59.454475  407144 command_runner.go:130] >     {
	I0419 20:37:59.454488  407144 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0419 20:37:59.454496  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454502  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0419 20:37:59.454511  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454518  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454535  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0419 20:37:59.454551  407144 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0419 20:37:59.454564  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454574  407144 command_runner.go:130] >       "size": "112170310",
	I0419 20:37:59.454581  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454585  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454589  407144 command_runner.go:130] >       },
	I0419 20:37:59.454596  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454605  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454612  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454620  407144 command_runner.go:130] >     },
	I0419 20:37:59.454626  407144 command_runner.go:130] >     {
	I0419 20:37:59.454638  407144 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0419 20:37:59.454645  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454656  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0419 20:37:59.454663  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454669  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454690  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0419 20:37:59.454706  407144 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0419 20:37:59.454715  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454722  407144 command_runner.go:130] >       "size": "85932953",
	I0419 20:37:59.454733  407144 command_runner.go:130] >       "uid": null,
	I0419 20:37:59.454740  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454750  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454756  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454764  407144 command_runner.go:130] >     },
	I0419 20:37:59.454774  407144 command_runner.go:130] >     {
	I0419 20:37:59.454788  407144 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0419 20:37:59.454798  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454807  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0419 20:37:59.454815  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454822  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454836  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0419 20:37:59.454847  407144 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0419 20:37:59.454856  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454863  407144 command_runner.go:130] >       "size": "63026502",
	I0419 20:37:59.454872  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.454879  407144 command_runner.go:130] >         "value": "0"
	I0419 20:37:59.454888  407144 command_runner.go:130] >       },
	I0419 20:37:59.454895  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.454901  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.454911  407144 command_runner.go:130] >       "pinned": false
	I0419 20:37:59.454917  407144 command_runner.go:130] >     },
	I0419 20:37:59.454930  407144 command_runner.go:130] >     {
	I0419 20:37:59.454943  407144 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0419 20:37:59.454953  407144 command_runner.go:130] >       "repoTags": [
	I0419 20:37:59.454961  407144 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0419 20:37:59.454969  407144 command_runner.go:130] >       ],
	I0419 20:37:59.454976  407144 command_runner.go:130] >       "repoDigests": [
	I0419 20:37:59.454990  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0419 20:37:59.455007  407144 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0419 20:37:59.455014  407144 command_runner.go:130] >       ],
	I0419 20:37:59.455019  407144 command_runner.go:130] >       "size": "750414",
	I0419 20:37:59.455025  407144 command_runner.go:130] >       "uid": {
	I0419 20:37:59.455035  407144 command_runner.go:130] >         "value": "65535"
	I0419 20:37:59.455043  407144 command_runner.go:130] >       },
	I0419 20:37:59.455049  407144 command_runner.go:130] >       "username": "",
	I0419 20:37:59.455058  407144 command_runner.go:130] >       "spec": null,
	I0419 20:37:59.455063  407144 command_runner.go:130] >       "pinned": true
	I0419 20:37:59.455068  407144 command_runner.go:130] >     }
	I0419 20:37:59.455072  407144 command_runner.go:130] >   ]
	I0419 20:37:59.455079  407144 command_runner.go:130] > }
	I0419 20:37:59.455323  407144 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:37:59.455342  407144 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:37:59.455350  407144 kubeadm.go:928] updating node { 192.168.39.193 8443 v1.30.0 crio true true} ...
	I0419 20:37:59.455461  407144 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-151935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:37:59.455530  407144 ssh_runner.go:195] Run: crio config
	I0419 20:37:59.488235  407144 command_runner.go:130] ! time="2024-04-19 20:37:59.470103506Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0419 20:37:59.494800  407144 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0419 20:37:59.507846  407144 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0419 20:37:59.507870  407144 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0419 20:37:59.507876  407144 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0419 20:37:59.507880  407144 command_runner.go:130] > #
	I0419 20:37:59.507886  407144 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0419 20:37:59.507897  407144 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0419 20:37:59.507903  407144 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0419 20:37:59.507913  407144 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0419 20:37:59.507920  407144 command_runner.go:130] > # reload'.
	I0419 20:37:59.507926  407144 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0419 20:37:59.507931  407144 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0419 20:37:59.507937  407144 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0419 20:37:59.507943  407144 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0419 20:37:59.507950  407144 command_runner.go:130] > [crio]
	I0419 20:37:59.507956  407144 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0419 20:37:59.507963  407144 command_runner.go:130] > # containers images, in this directory.
	I0419 20:37:59.507968  407144 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0419 20:37:59.507980  407144 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0419 20:37:59.507987  407144 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0419 20:37:59.507995  407144 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0419 20:37:59.508001  407144 command_runner.go:130] > # imagestore = ""
	I0419 20:37:59.508007  407144 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0419 20:37:59.508016  407144 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0419 20:37:59.508020  407144 command_runner.go:130] > storage_driver = "overlay"
	I0419 20:37:59.508025  407144 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0419 20:37:59.508033  407144 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0419 20:37:59.508039  407144 command_runner.go:130] > storage_option = [
	I0419 20:37:59.508044  407144 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0419 20:37:59.508050  407144 command_runner.go:130] > ]
	I0419 20:37:59.508057  407144 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0419 20:37:59.508065  407144 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0419 20:37:59.508070  407144 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0419 20:37:59.508075  407144 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0419 20:37:59.508083  407144 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0419 20:37:59.508091  407144 command_runner.go:130] > # always happen on a node reboot
	I0419 20:37:59.508095  407144 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0419 20:37:59.508106  407144 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0419 20:37:59.508114  407144 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0419 20:37:59.508121  407144 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0419 20:37:59.508126  407144 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0419 20:37:59.508135  407144 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0419 20:37:59.508150  407144 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0419 20:37:59.508156  407144 command_runner.go:130] > # internal_wipe = true
	I0419 20:37:59.508163  407144 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0419 20:37:59.508171  407144 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0419 20:37:59.508175  407144 command_runner.go:130] > # internal_repair = false
	I0419 20:37:59.508180  407144 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0419 20:37:59.508188  407144 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0419 20:37:59.508196  407144 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0419 20:37:59.508202  407144 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0419 20:37:59.508212  407144 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0419 20:37:59.508218  407144 command_runner.go:130] > [crio.api]
	I0419 20:37:59.508224  407144 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0419 20:37:59.508230  407144 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0419 20:37:59.508235  407144 command_runner.go:130] > # IP address on which the stream server will listen.
	I0419 20:37:59.508241  407144 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0419 20:37:59.508248  407144 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0419 20:37:59.508255  407144 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0419 20:37:59.508259  407144 command_runner.go:130] > # stream_port = "0"
	I0419 20:37:59.508267  407144 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0419 20:37:59.508271  407144 command_runner.go:130] > # stream_enable_tls = false
	I0419 20:37:59.508279  407144 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0419 20:37:59.508283  407144 command_runner.go:130] > # stream_idle_timeout = ""
	I0419 20:37:59.508292  407144 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0419 20:37:59.508301  407144 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0419 20:37:59.508307  407144 command_runner.go:130] > # minutes.
	I0419 20:37:59.508311  407144 command_runner.go:130] > # stream_tls_cert = ""
	I0419 20:37:59.508319  407144 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0419 20:37:59.508327  407144 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0419 20:37:59.508331  407144 command_runner.go:130] > # stream_tls_key = ""
	I0419 20:37:59.508339  407144 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0419 20:37:59.508345  407144 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0419 20:37:59.508367  407144 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0419 20:37:59.508375  407144 command_runner.go:130] > # stream_tls_ca = ""
	I0419 20:37:59.508382  407144 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0419 20:37:59.508386  407144 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0419 20:37:59.508395  407144 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0419 20:37:59.508412  407144 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0419 20:37:59.508421  407144 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0419 20:37:59.508428  407144 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0419 20:37:59.508432  407144 command_runner.go:130] > [crio.runtime]
	I0419 20:37:59.508438  407144 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0419 20:37:59.508446  407144 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0419 20:37:59.508450  407144 command_runner.go:130] > # "nofile=1024:2048"
	I0419 20:37:59.508457  407144 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0419 20:37:59.508463  407144 command_runner.go:130] > # default_ulimits = [
	I0419 20:37:59.508466  407144 command_runner.go:130] > # ]
	I0419 20:37:59.508473  407144 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0419 20:37:59.508479  407144 command_runner.go:130] > # no_pivot = false
	I0419 20:37:59.508486  407144 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0419 20:37:59.508496  407144 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0419 20:37:59.508503  407144 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0419 20:37:59.508508  407144 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0419 20:37:59.508515  407144 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0419 20:37:59.508522  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0419 20:37:59.508528  407144 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0419 20:37:59.508532  407144 command_runner.go:130] > # Cgroup setting for conmon
	I0419 20:37:59.508541  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0419 20:37:59.508551  407144 command_runner.go:130] > conmon_cgroup = "pod"
	I0419 20:37:59.508559  407144 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0419 20:37:59.508564  407144 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0419 20:37:59.508573  407144 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0419 20:37:59.508577  407144 command_runner.go:130] > conmon_env = [
	I0419 20:37:59.508583  407144 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0419 20:37:59.508588  407144 command_runner.go:130] > ]
	I0419 20:37:59.508594  407144 command_runner.go:130] > # Additional environment variables to set for all the
	I0419 20:37:59.508601  407144 command_runner.go:130] > # containers. These are overridden if set in the
	I0419 20:37:59.508606  407144 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0419 20:37:59.508613  407144 command_runner.go:130] > # default_env = [
	I0419 20:37:59.508617  407144 command_runner.go:130] > # ]
	I0419 20:37:59.508625  407144 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0419 20:37:59.508650  407144 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0419 20:37:59.508660  407144 command_runner.go:130] > # selinux = false
	I0419 20:37:59.508673  407144 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0419 20:37:59.508682  407144 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0419 20:37:59.508690  407144 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0419 20:37:59.508696  407144 command_runner.go:130] > # seccomp_profile = ""
	I0419 20:37:59.508702  407144 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0419 20:37:59.508710  407144 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0419 20:37:59.508716  407144 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0419 20:37:59.508723  407144 command_runner.go:130] > # which might increase security.
	I0419 20:37:59.508728  407144 command_runner.go:130] > # This option is currently deprecated,
	I0419 20:37:59.508736  407144 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0419 20:37:59.508740  407144 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0419 20:37:59.508749  407144 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0419 20:37:59.508755  407144 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0419 20:37:59.508765  407144 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0419 20:37:59.508771  407144 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0419 20:37:59.508778  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.508783  407144 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0419 20:37:59.508791  407144 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0419 20:37:59.508795  407144 command_runner.go:130] > # the cgroup blockio controller.
	I0419 20:37:59.508802  407144 command_runner.go:130] > # blockio_config_file = ""
	I0419 20:37:59.508808  407144 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0419 20:37:59.508814  407144 command_runner.go:130] > # blockio parameters.
	I0419 20:37:59.508818  407144 command_runner.go:130] > # blockio_reload = false
	I0419 20:37:59.508823  407144 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0419 20:37:59.508830  407144 command_runner.go:130] > # irqbalance daemon.
	I0419 20:37:59.508835  407144 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0419 20:37:59.508843  407144 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0419 20:37:59.508849  407144 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0419 20:37:59.508858  407144 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0419 20:37:59.508866  407144 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0419 20:37:59.508872  407144 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0419 20:37:59.508880  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.508884  407144 command_runner.go:130] > # rdt_config_file = ""
	I0419 20:37:59.508892  407144 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0419 20:37:59.508896  407144 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0419 20:37:59.508921  407144 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0419 20:37:59.508931  407144 command_runner.go:130] > # separate_pull_cgroup = ""
	I0419 20:37:59.508939  407144 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0419 20:37:59.508945  407144 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0419 20:37:59.508952  407144 command_runner.go:130] > # will be added.
	I0419 20:37:59.508955  407144 command_runner.go:130] > # default_capabilities = [
	I0419 20:37:59.508961  407144 command_runner.go:130] > # 	"CHOWN",
	I0419 20:37:59.508965  407144 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0419 20:37:59.508971  407144 command_runner.go:130] > # 	"FSETID",
	I0419 20:37:59.508974  407144 command_runner.go:130] > # 	"FOWNER",
	I0419 20:37:59.508980  407144 command_runner.go:130] > # 	"SETGID",
	I0419 20:37:59.508987  407144 command_runner.go:130] > # 	"SETUID",
	I0419 20:37:59.508993  407144 command_runner.go:130] > # 	"SETPCAP",
	I0419 20:37:59.508997  407144 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0419 20:37:59.509003  407144 command_runner.go:130] > # 	"KILL",
	I0419 20:37:59.509007  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509014  407144 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0419 20:37:59.509023  407144 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0419 20:37:59.509030  407144 command_runner.go:130] > # add_inheritable_capabilities = false
	I0419 20:37:59.509038  407144 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0419 20:37:59.509045  407144 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0419 20:37:59.509050  407144 command_runner.go:130] > default_sysctls = [
	I0419 20:37:59.509055  407144 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0419 20:37:59.509060  407144 command_runner.go:130] > ]
	I0419 20:37:59.509067  407144 command_runner.go:130] > # List of devices on the host that a
	I0419 20:37:59.509075  407144 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0419 20:37:59.509082  407144 command_runner.go:130] > # allowed_devices = [
	I0419 20:37:59.509086  407144 command_runner.go:130] > # 	"/dev/fuse",
	I0419 20:37:59.509092  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509097  407144 command_runner.go:130] > # List of additional devices. specified as
	I0419 20:37:59.509106  407144 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0419 20:37:59.509113  407144 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0419 20:37:59.509121  407144 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0419 20:37:59.509126  407144 command_runner.go:130] > # additional_devices = [
	I0419 20:37:59.509130  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509137  407144 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0419 20:37:59.509141  407144 command_runner.go:130] > # cdi_spec_dirs = [
	I0419 20:37:59.509147  407144 command_runner.go:130] > # 	"/etc/cdi",
	I0419 20:37:59.509151  407144 command_runner.go:130] > # 	"/var/run/cdi",
	I0419 20:37:59.509155  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509163  407144 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0419 20:37:59.509171  407144 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0419 20:37:59.509176  407144 command_runner.go:130] > # Defaults to false.
	I0419 20:37:59.509181  407144 command_runner.go:130] > # device_ownership_from_security_context = false
	I0419 20:37:59.509189  407144 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0419 20:37:59.509195  407144 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0419 20:37:59.509201  407144 command_runner.go:130] > # hooks_dir = [
	I0419 20:37:59.509205  407144 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0419 20:37:59.509211  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509217  407144 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0419 20:37:59.509226  407144 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0419 20:37:59.509233  407144 command_runner.go:130] > # its default mounts from the following two files:
	I0419 20:37:59.509236  407144 command_runner.go:130] > #
	I0419 20:37:59.509242  407144 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0419 20:37:59.509250  407144 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0419 20:37:59.509258  407144 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0419 20:37:59.509264  407144 command_runner.go:130] > #
	I0419 20:37:59.509269  407144 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0419 20:37:59.509277  407144 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0419 20:37:59.509283  407144 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0419 20:37:59.509293  407144 command_runner.go:130] > #      only add mounts it finds in this file.
	I0419 20:37:59.509299  407144 command_runner.go:130] > #
	I0419 20:37:59.509303  407144 command_runner.go:130] > # default_mounts_file = ""
	I0419 20:37:59.509310  407144 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0419 20:37:59.509317  407144 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0419 20:37:59.509320  407144 command_runner.go:130] > pids_limit = 1024
	I0419 20:37:59.509329  407144 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0419 20:37:59.509337  407144 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0419 20:37:59.509346  407144 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0419 20:37:59.509355  407144 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0419 20:37:59.509361  407144 command_runner.go:130] > # log_size_max = -1
	I0419 20:37:59.509368  407144 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0419 20:37:59.509374  407144 command_runner.go:130] > # log_to_journald = false
	I0419 20:37:59.509381  407144 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0419 20:37:59.509389  407144 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0419 20:37:59.509396  407144 command_runner.go:130] > # Path to directory for container attach sockets.
	I0419 20:37:59.509402  407144 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0419 20:37:59.509413  407144 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0419 20:37:59.509419  407144 command_runner.go:130] > # bind_mount_prefix = ""
	I0419 20:37:59.509424  407144 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0419 20:37:59.509430  407144 command_runner.go:130] > # read_only = false
	I0419 20:37:59.509436  407144 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0419 20:37:59.509444  407144 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0419 20:37:59.509449  407144 command_runner.go:130] > # live configuration reload.
	I0419 20:37:59.509453  407144 command_runner.go:130] > # log_level = "info"
	I0419 20:37:59.509461  407144 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0419 20:37:59.509469  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.509473  407144 command_runner.go:130] > # log_filter = ""
	I0419 20:37:59.509481  407144 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0419 20:37:59.509490  407144 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0419 20:37:59.509496  407144 command_runner.go:130] > # separated by comma.
	I0419 20:37:59.509504  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509510  407144 command_runner.go:130] > # uid_mappings = ""
	I0419 20:37:59.509515  407144 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0419 20:37:59.509523  407144 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0419 20:37:59.509527  407144 command_runner.go:130] > # separated by comma.
	I0419 20:37:59.509536  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509544  407144 command_runner.go:130] > # gid_mappings = ""
	I0419 20:37:59.509553  407144 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0419 20:37:59.509561  407144 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0419 20:37:59.509568  407144 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0419 20:37:59.509577  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509583  407144 command_runner.go:130] > # minimum_mappable_uid = -1
	I0419 20:37:59.509589  407144 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0419 20:37:59.509598  407144 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0419 20:37:59.509606  407144 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0419 20:37:59.509613  407144 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0419 20:37:59.509620  407144 command_runner.go:130] > # minimum_mappable_gid = -1
	I0419 20:37:59.509626  407144 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0419 20:37:59.509635  407144 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0419 20:37:59.509642  407144 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0419 20:37:59.509650  407144 command_runner.go:130] > # ctr_stop_timeout = 30
	I0419 20:37:59.509655  407144 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0419 20:37:59.509663  407144 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0419 20:37:59.509669  407144 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0419 20:37:59.509676  407144 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0419 20:37:59.509680  407144 command_runner.go:130] > drop_infra_ctr = false
	I0419 20:37:59.509688  407144 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0419 20:37:59.509693  407144 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0419 20:37:59.509702  407144 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0419 20:37:59.509709  407144 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0419 20:37:59.509715  407144 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0419 20:37:59.509723  407144 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0419 20:37:59.509731  407144 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0419 20:37:59.509736  407144 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0419 20:37:59.509741  407144 command_runner.go:130] > # shared_cpuset = ""
	I0419 20:37:59.509747  407144 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0419 20:37:59.509754  407144 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0419 20:37:59.509758  407144 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0419 20:37:59.509765  407144 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0419 20:37:59.509771  407144 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0419 20:37:59.509776  407144 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0419 20:37:59.509787  407144 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0419 20:37:59.509791  407144 command_runner.go:130] > # enable_criu_support = false
	I0419 20:37:59.509796  407144 command_runner.go:130] > # Enable/disable the generation of the container,
	I0419 20:37:59.509802  407144 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0419 20:37:59.509806  407144 command_runner.go:130] > # enable_pod_events = false
	I0419 20:37:59.509812  407144 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0419 20:37:59.509825  407144 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0419 20:37:59.509832  407144 command_runner.go:130] > # default_runtime = "runc"
	I0419 20:37:59.509837  407144 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0419 20:37:59.509846  407144 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0419 20:37:59.509857  407144 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0419 20:37:59.509864  407144 command_runner.go:130] > # creation as a file is not desired either.
	I0419 20:37:59.509873  407144 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0419 20:37:59.509880  407144 command_runner.go:130] > # the hostname is being managed dynamically.
	I0419 20:37:59.509885  407144 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0419 20:37:59.509891  407144 command_runner.go:130] > # ]
	I0419 20:37:59.509896  407144 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0419 20:37:59.509905  407144 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0419 20:37:59.509912  407144 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0419 20:37:59.509920  407144 command_runner.go:130] > # Each entry in the table should follow the format:
	I0419 20:37:59.509923  407144 command_runner.go:130] > #
	I0419 20:37:59.509927  407144 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0419 20:37:59.509934  407144 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0419 20:37:59.509974  407144 command_runner.go:130] > # runtime_type = "oci"
	I0419 20:37:59.509982  407144 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0419 20:37:59.509987  407144 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0419 20:37:59.509991  407144 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0419 20:37:59.509995  407144 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0419 20:37:59.509999  407144 command_runner.go:130] > # monitor_env = []
	I0419 20:37:59.510006  407144 command_runner.go:130] > # privileged_without_host_devices = false
	I0419 20:37:59.510013  407144 command_runner.go:130] > # allowed_annotations = []
	I0419 20:37:59.510018  407144 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0419 20:37:59.510024  407144 command_runner.go:130] > # Where:
	I0419 20:37:59.510029  407144 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0419 20:37:59.510038  407144 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0419 20:37:59.510044  407144 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0419 20:37:59.510052  407144 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0419 20:37:59.510059  407144 command_runner.go:130] > #   in $PATH.
	I0419 20:37:59.510069  407144 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0419 20:37:59.510076  407144 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0419 20:37:59.510082  407144 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0419 20:37:59.510088  407144 command_runner.go:130] > #   state.
	I0419 20:37:59.510094  407144 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0419 20:37:59.510102  407144 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0419 20:37:59.510110  407144 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0419 20:37:59.510119  407144 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0419 20:37:59.510126  407144 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0419 20:37:59.510134  407144 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0419 20:37:59.510146  407144 command_runner.go:130] > #   The currently recognized values are:
	I0419 20:37:59.510155  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0419 20:37:59.510164  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0419 20:37:59.510170  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0419 20:37:59.510178  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0419 20:37:59.510187  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0419 20:37:59.510196  407144 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0419 20:37:59.510204  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0419 20:37:59.510212  407144 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0419 20:37:59.510220  407144 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0419 20:37:59.510229  407144 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0419 20:37:59.510233  407144 command_runner.go:130] > #   deprecated option "conmon".
	I0419 20:37:59.510242  407144 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0419 20:37:59.510248  407144 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0419 20:37:59.510254  407144 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0419 20:37:59.510262  407144 command_runner.go:130] > #   should be moved to the container's cgroup
	I0419 20:37:59.510268  407144 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0419 20:37:59.510275  407144 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0419 20:37:59.510282  407144 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0419 20:37:59.510289  407144 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0419 20:37:59.510292  407144 command_runner.go:130] > #
	I0419 20:37:59.510304  407144 command_runner.go:130] > # Using the seccomp notifier feature:
	I0419 20:37:59.510312  407144 command_runner.go:130] > #
	I0419 20:37:59.510320  407144 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0419 20:37:59.510329  407144 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0419 20:37:59.510334  407144 command_runner.go:130] > #
	I0419 20:37:59.510340  407144 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0419 20:37:59.510348  407144 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0419 20:37:59.510354  407144 command_runner.go:130] > #
	I0419 20:37:59.510360  407144 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0419 20:37:59.510365  407144 command_runner.go:130] > # feature.
	I0419 20:37:59.510374  407144 command_runner.go:130] > #
	I0419 20:37:59.510382  407144 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0419 20:37:59.510389  407144 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0419 20:37:59.510397  407144 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0419 20:37:59.510409  407144 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0419 20:37:59.510421  407144 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0419 20:37:59.510426  407144 command_runner.go:130] > #
	I0419 20:37:59.510433  407144 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0419 20:37:59.510440  407144 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0419 20:37:59.510444  407144 command_runner.go:130] > #
	I0419 20:37:59.510450  407144 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0419 20:37:59.510458  407144 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0419 20:37:59.510464  407144 command_runner.go:130] > #
	I0419 20:37:59.510470  407144 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0419 20:37:59.510478  407144 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0419 20:37:59.510482  407144 command_runner.go:130] > # limitation.
	I0419 20:37:59.510488  407144 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0419 20:37:59.510493  407144 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0419 20:37:59.510497  407144 command_runner.go:130] > runtime_type = "oci"
	I0419 20:37:59.510503  407144 command_runner.go:130] > runtime_root = "/run/runc"
	I0419 20:37:59.510508  407144 command_runner.go:130] > runtime_config_path = ""
	I0419 20:37:59.510515  407144 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0419 20:37:59.510522  407144 command_runner.go:130] > monitor_cgroup = "pod"
	I0419 20:37:59.510526  407144 command_runner.go:130] > monitor_exec_cgroup = ""
	I0419 20:37:59.510531  407144 command_runner.go:130] > monitor_env = [
	I0419 20:37:59.510537  407144 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0419 20:37:59.510542  407144 command_runner.go:130] > ]
	I0419 20:37:59.510547  407144 command_runner.go:130] > privileged_without_host_devices = false
	I0419 20:37:59.510555  407144 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0419 20:37:59.510563  407144 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0419 20:37:59.510569  407144 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0419 20:37:59.510576  407144 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0419 20:37:59.510588  407144 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0419 20:37:59.510596  407144 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0419 20:37:59.510607  407144 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0419 20:37:59.510617  407144 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0419 20:37:59.510624  407144 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0419 20:37:59.510634  407144 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0419 20:37:59.510640  407144 command_runner.go:130] > # Example:
	I0419 20:37:59.510644  407144 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0419 20:37:59.510652  407144 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0419 20:37:59.510661  407144 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0419 20:37:59.510669  407144 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0419 20:37:59.510675  407144 command_runner.go:130] > # cpuset = "0-1"
	I0419 20:37:59.510679  407144 command_runner.go:130] > # cpushares = 0
	I0419 20:37:59.510685  407144 command_runner.go:130] > # Where:
	I0419 20:37:59.510690  407144 command_runner.go:130] > # The workload name is workload-type.
	I0419 20:37:59.510699  407144 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0419 20:37:59.510706  407144 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0419 20:37:59.510714  407144 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0419 20:37:59.510721  407144 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0419 20:37:59.510729  407144 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0419 20:37:59.510734  407144 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0419 20:37:59.510740  407144 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0419 20:37:59.510747  407144 command_runner.go:130] > # Default value is set to true
	I0419 20:37:59.510751  407144 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0419 20:37:59.510757  407144 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0419 20:37:59.510762  407144 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0419 20:37:59.510766  407144 command_runner.go:130] > # Default value is set to 'false'
	I0419 20:37:59.510773  407144 command_runner.go:130] > # disable_hostport_mapping = false
	I0419 20:37:59.510779  407144 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0419 20:37:59.510782  407144 command_runner.go:130] > #
	I0419 20:37:59.510788  407144 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0419 20:37:59.510793  407144 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0419 20:37:59.510799  407144 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0419 20:37:59.510805  407144 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0419 20:37:59.510812  407144 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0419 20:37:59.510816  407144 command_runner.go:130] > [crio.image]
	I0419 20:37:59.510821  407144 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0419 20:37:59.510825  407144 command_runner.go:130] > # default_transport = "docker://"
	I0419 20:37:59.510831  407144 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0419 20:37:59.510837  407144 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0419 20:37:59.510840  407144 command_runner.go:130] > # global_auth_file = ""
	I0419 20:37:59.510845  407144 command_runner.go:130] > # The image used to instantiate infra containers.
	I0419 20:37:59.510850  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.510854  407144 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0419 20:37:59.510860  407144 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0419 20:37:59.510869  407144 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0419 20:37:59.510873  407144 command_runner.go:130] > # This option supports live configuration reload.
	I0419 20:37:59.510877  407144 command_runner.go:130] > # pause_image_auth_file = ""
	I0419 20:37:59.510882  407144 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0419 20:37:59.510888  407144 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0419 20:37:59.510893  407144 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0419 20:37:59.510899  407144 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0419 20:37:59.510902  407144 command_runner.go:130] > # pause_command = "/pause"
	I0419 20:37:59.510908  407144 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0419 20:37:59.510913  407144 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0419 20:37:59.510918  407144 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0419 20:37:59.510924  407144 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0419 20:37:59.510929  407144 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0419 20:37:59.510935  407144 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0419 20:37:59.510938  407144 command_runner.go:130] > # pinned_images = [
	I0419 20:37:59.510941  407144 command_runner.go:130] > # ]
	I0419 20:37:59.510946  407144 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0419 20:37:59.510952  407144 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0419 20:37:59.510958  407144 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0419 20:37:59.510964  407144 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0419 20:37:59.510968  407144 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0419 20:37:59.510972  407144 command_runner.go:130] > # signature_policy = ""
	I0419 20:37:59.510977  407144 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0419 20:37:59.510983  407144 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0419 20:37:59.510991  407144 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0419 20:37:59.511003  407144 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0419 20:37:59.511010  407144 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0419 20:37:59.511015  407144 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0419 20:37:59.511023  407144 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0419 20:37:59.511031  407144 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0419 20:37:59.511037  407144 command_runner.go:130] > # changing them here.
	I0419 20:37:59.511041  407144 command_runner.go:130] > # insecure_registries = [
	I0419 20:37:59.511046  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511053  407144 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0419 20:37:59.511060  407144 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0419 20:37:59.511064  407144 command_runner.go:130] > # image_volumes = "mkdir"
	I0419 20:37:59.511073  407144 command_runner.go:130] > # Temporary directory to use for storing big files
	I0419 20:37:59.511080  407144 command_runner.go:130] > # big_files_temporary_dir = ""
	I0419 20:37:59.511086  407144 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0419 20:37:59.511092  407144 command_runner.go:130] > # CNI plugins.
	I0419 20:37:59.511095  407144 command_runner.go:130] > [crio.network]
	I0419 20:37:59.511103  407144 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0419 20:37:59.511109  407144 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0419 20:37:59.511116  407144 command_runner.go:130] > # cni_default_network = ""
	I0419 20:37:59.511122  407144 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0419 20:37:59.511128  407144 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0419 20:37:59.511134  407144 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0419 20:37:59.511139  407144 command_runner.go:130] > # plugin_dirs = [
	I0419 20:37:59.511143  407144 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0419 20:37:59.511146  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511152  407144 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0419 20:37:59.511158  407144 command_runner.go:130] > [crio.metrics]
	I0419 20:37:59.511163  407144 command_runner.go:130] > # Globally enable or disable metrics support.
	I0419 20:37:59.511169  407144 command_runner.go:130] > enable_metrics = true
	I0419 20:37:59.511174  407144 command_runner.go:130] > # Specify enabled metrics collectors.
	I0419 20:37:59.511180  407144 command_runner.go:130] > # Per default all metrics are enabled.
	I0419 20:37:59.511186  407144 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0419 20:37:59.511194  407144 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0419 20:37:59.511202  407144 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0419 20:37:59.511208  407144 command_runner.go:130] > # metrics_collectors = [
	I0419 20:37:59.511212  407144 command_runner.go:130] > # 	"operations",
	I0419 20:37:59.511219  407144 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0419 20:37:59.511223  407144 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0419 20:37:59.511230  407144 command_runner.go:130] > # 	"operations_errors",
	I0419 20:37:59.511234  407144 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0419 20:37:59.511240  407144 command_runner.go:130] > # 	"image_pulls_by_name",
	I0419 20:37:59.511245  407144 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0419 20:37:59.511254  407144 command_runner.go:130] > # 	"image_pulls_failures",
	I0419 20:37:59.511260  407144 command_runner.go:130] > # 	"image_pulls_successes",
	I0419 20:37:59.511264  407144 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0419 20:37:59.511271  407144 command_runner.go:130] > # 	"image_layer_reuse",
	I0419 20:37:59.511276  407144 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0419 20:37:59.511286  407144 command_runner.go:130] > # 	"containers_oom_total",
	I0419 20:37:59.511293  407144 command_runner.go:130] > # 	"containers_oom",
	I0419 20:37:59.511297  407144 command_runner.go:130] > # 	"processes_defunct",
	I0419 20:37:59.511303  407144 command_runner.go:130] > # 	"operations_total",
	I0419 20:37:59.511307  407144 command_runner.go:130] > # 	"operations_latency_seconds",
	I0419 20:37:59.511311  407144 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0419 20:37:59.511318  407144 command_runner.go:130] > # 	"operations_errors_total",
	I0419 20:37:59.511322  407144 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0419 20:37:59.511329  407144 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0419 20:37:59.511333  407144 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0419 20:37:59.511338  407144 command_runner.go:130] > # 	"image_pulls_success_total",
	I0419 20:37:59.511342  407144 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0419 20:37:59.511349  407144 command_runner.go:130] > # 	"containers_oom_count_total",
	I0419 20:37:59.511353  407144 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0419 20:37:59.511359  407144 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0419 20:37:59.511363  407144 command_runner.go:130] > # ]
	I0419 20:37:59.511370  407144 command_runner.go:130] > # The port on which the metrics server will listen.
	I0419 20:37:59.511374  407144 command_runner.go:130] > # metrics_port = 9090
	I0419 20:37:59.511379  407144 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0419 20:37:59.511386  407144 command_runner.go:130] > # metrics_socket = ""
	I0419 20:37:59.511391  407144 command_runner.go:130] > # The certificate for the secure metrics server.
	I0419 20:37:59.511399  407144 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0419 20:37:59.511410  407144 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0419 20:37:59.511417  407144 command_runner.go:130] > # certificate on any modification event.
	I0419 20:37:59.511421  407144 command_runner.go:130] > # metrics_cert = ""
	I0419 20:37:59.511429  407144 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0419 20:37:59.511434  407144 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0419 20:37:59.511441  407144 command_runner.go:130] > # metrics_key = ""
	I0419 20:37:59.511446  407144 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0419 20:37:59.511453  407144 command_runner.go:130] > [crio.tracing]
	I0419 20:37:59.511458  407144 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0419 20:37:59.511464  407144 command_runner.go:130] > # enable_tracing = false
	I0419 20:37:59.511469  407144 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0419 20:37:59.511476  407144 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0419 20:37:59.511482  407144 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0419 20:37:59.511490  407144 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0419 20:37:59.511495  407144 command_runner.go:130] > # CRI-O NRI configuration.
	I0419 20:37:59.511501  407144 command_runner.go:130] > [crio.nri]
	I0419 20:37:59.511506  407144 command_runner.go:130] > # Globally enable or disable NRI.
	I0419 20:37:59.511511  407144 command_runner.go:130] > # enable_nri = false
	I0419 20:37:59.511519  407144 command_runner.go:130] > # NRI socket to listen on.
	I0419 20:37:59.511526  407144 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0419 20:37:59.511531  407144 command_runner.go:130] > # NRI plugin directory to use.
	I0419 20:37:59.511538  407144 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0419 20:37:59.511542  407144 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0419 20:37:59.511549  407144 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0419 20:37:59.511554  407144 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0419 20:37:59.511561  407144 command_runner.go:130] > # nri_disable_connections = false
	I0419 20:37:59.511566  407144 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0419 20:37:59.511573  407144 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0419 20:37:59.511578  407144 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0419 20:37:59.511584  407144 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0419 20:37:59.511590  407144 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0419 20:37:59.511595  407144 command_runner.go:130] > [crio.stats]
	I0419 20:37:59.511601  407144 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0419 20:37:59.511608  407144 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0419 20:37:59.511615  407144 command_runner.go:130] > # stats_collection_period = 0
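	The dump above is the complete CRI-O configuration echoed back from the node. As a hedged illustration only (the drop-in directory /etc/crio/crio.conf.d and the file name are assumptions, and the values simply mirror settings already shown in the dump), a handful of these options could be overridden without touching the main file:
	
	sudo tee /etc/crio/crio.conf.d/99-override.conf <<-'EOF'
		[crio.runtime]
		# keep runc as the default handler and the pids limit shown in the dump above
		default_runtime = "runc"
		pids_limit = 1024
		[crio.metrics]
		enable_metrics = true
	EOF
	sudo systemctl restart crio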
	I0419 20:37:59.511750  407144 cni.go:84] Creating CNI manager for ""
	I0419 20:37:59.511764  407144 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0419 20:37:59.511777  407144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:37:59.511806  407144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-151935 NodeName:multinode-151935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:37:59.511938  407144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-151935"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
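	The YAML above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration and is written to /var/tmp/minikube/kubeadm.yaml.new by the scp step below. As a minimal, hedged sketch of how such a file is consumed (the exact invocation and flags minikube uses are not shown in this excerpt):
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml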
	
	I0419 20:37:59.512002  407144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:37:59.523409  407144 command_runner.go:130] > kubeadm
	I0419 20:37:59.523427  407144 command_runner.go:130] > kubectl
	I0419 20:37:59.523430  407144 command_runner.go:130] > kubelet
	I0419 20:37:59.523455  407144 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:37:59.523503  407144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:37:59.534698  407144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0419 20:37:59.552173  407144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:37:59.570130  407144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0419 20:37:59.587269  407144 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0419 20:37:59.591269  407144 command_runner.go:130] > 192.168.39.193	control-plane.minikube.internal
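	The grep above confirms that control-plane.minikube.internal already resolves to 192.168.39.193 in /etc/hosts. If the entry were missing, an idempotent way to add it (an illustrative sketch, not necessarily the exact command minikube runs) would be:
	
	grep -q 'control-plane.minikube.internal$' /etc/hosts || \
	  echo '192.168.39.193	control-plane.minikube.internal' | sudo tee -a /etc/hosts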
	I0419 20:37:59.591355  407144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:37:59.731442  407144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:37:59.747761  407144 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935 for IP: 192.168.39.193
	I0419 20:37:59.747781  407144 certs.go:194] generating shared ca certs ...
	I0419 20:37:59.747797  407144 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:37:59.747948  407144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:37:59.747996  407144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:37:59.748007  407144 certs.go:256] generating profile certs ...
	I0419 20:37:59.748089  407144 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/client.key
	I0419 20:37:59.748148  407144 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key.e4fd995d
	I0419 20:37:59.748184  407144 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key
	I0419 20:37:59.748197  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0419 20:37:59.748212  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0419 20:37:59.748224  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0419 20:37:59.748236  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0419 20:37:59.748249  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0419 20:37:59.748261  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0419 20:37:59.748273  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0419 20:37:59.748288  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0419 20:37:59.748343  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:37:59.748376  407144 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:37:59.748391  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:37:59.748414  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:37:59.748439  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:37:59.748459  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:37:59.748493  407144 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:37:59.748518  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:37:59.748531  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem -> /usr/share/ca-certificates/373998.pem
	I0419 20:37:59.748543  407144 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> /usr/share/ca-certificates/3739982.pem
	I0419 20:37:59.749421  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:37:59.774230  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:37:59.798630  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:37:59.824171  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:37:59.848364  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0419 20:37:59.872223  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:37:59.897063  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:37:59.921583  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/multinode-151935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:37:59.945811  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:37:59.970281  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:37:59.994835  407144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:38:00.033516  407144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:38:00.063556  407144 ssh_runner.go:195] Run: openssl version
	I0419 20:38:00.069802  407144 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0419 20:38:00.070076  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:38:00.082505  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087294  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087458  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.087529  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:38:00.093481  407144 command_runner.go:130] > 3ec20f2e
	I0419 20:38:00.093672  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:38:00.105374  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:38:00.117722  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122531  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122568  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.122619  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:38:00.128767  407144 command_runner.go:130] > b5213941
	I0419 20:38:00.128953  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:38:00.140337  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:38:00.152538  407144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157285  407144 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157440  407144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.157509  407144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:38:00.163441  407144 command_runner.go:130] > 51391683
	I0419 20:38:00.163515  407144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
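	The three rounds above repeat one pattern: ensure the CA file exists under /usr/share/ca-certificates, hash it with openssl, and symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL can find it. Condensed into a sketch using the minikubeCA values from the log (hash b5213941):
	
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"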
	I0419 20:38:00.174520  407144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:38:00.179585  407144 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:38:00.179616  407144 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0419 20:38:00.179623  407144 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0419 20:38:00.179629  407144 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0419 20:38:00.179639  407144 command_runner.go:130] > Access: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179645  407144 command_runner.go:130] > Modify: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179651  407144 command_runner.go:130] > Change: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179659  407144 command_runner.go:130] >  Birth: 2024-04-19 20:31:47.281499652 +0000
	I0419 20:38:00.179719  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:38:00.186138  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.186264  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:38:00.192508  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.192713  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:38:00.198887  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.198984  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:38:00.205720  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.205792  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:38:00.212084  407144 command_runner.go:130] > Certificate will not expire
	I0419 20:38:00.212158  407144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 20:38:00.218077  407144 command_runner.go:130] > Certificate will not expire
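	Each check above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; "Certificate will not expire" is openssl's own output for a passing check, and a non-zero exit status would signal that renewal is needed. Standalone form of one of the checks:
	
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo 'still valid for 24h' || echo 'expires within 24h'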
	I0419 20:38:00.218267  407144 kubeadm.go:391] StartCluster: {Name:multinode-151935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-151935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.219 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:38:00.218429  407144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:38:00.218478  407144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:38:00.260412  407144 command_runner.go:130] > ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8
	I0419 20:38:00.260436  407144 command_runner.go:130] > 9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a
	I0419 20:38:00.260442  407144 command_runner.go:130] > 89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8
	I0419 20:38:00.260449  407144 command_runner.go:130] > 24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b
	I0419 20:38:00.260454  407144 command_runner.go:130] > 1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3
	I0419 20:38:00.260460  407144 command_runner.go:130] > 81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec
	I0419 20:38:00.260465  407144 command_runner.go:130] > 9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027
	I0419 20:38:00.260471  407144 command_runner.go:130] > 3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b
	I0419 20:38:00.260487  407144 cri.go:89] found id: "ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8"
	I0419 20:38:00.260494  407144 cri.go:89] found id: "9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a"
	I0419 20:38:00.260497  407144 cri.go:89] found id: "89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8"
	I0419 20:38:00.260500  407144 cri.go:89] found id: "24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b"
	I0419 20:38:00.260503  407144 cri.go:89] found id: "1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3"
	I0419 20:38:00.260506  407144 cri.go:89] found id: "81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec"
	I0419 20:38:00.260509  407144 cri.go:89] found id: "9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027"
	I0419 20:38:00.260512  407144 cri.go:89] found id: "3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b"
	I0419 20:38:00.260514  407144 cri.go:89] found id: ""
	I0419 20:38:00.260561  407144 ssh_runner.go:195] Run: sudo runc list -f json
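The two commands above are the substance of this step: minikube first asks crictl for every kube-system container ID, then cross-checks the runtime state with runc. The following is a minimal Go sketch of reproducing the same crictl listing by hand; it is not minikube's cri.go implementation, and it assumes crictl and sudo are available on the node (for example inside `minikube ssh`).

	// Minimal sketch (not minikube's cri.go): run the same crictl invocation
	// shown in the log above and print the returned container IDs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// crictl --quiet prints one container ID per line.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}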
	
	
	==> CRI-O <==
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.451884770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559308451859146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f267830-275b-4a12-b055-34e3b8bf5f0e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.452368126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca75feef-7e42-4524-b412-bc2a92b4b8da name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.452536962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca75feef-7e42-4524-b412-bc2a92b4b8da name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.452935565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca75feef-7e42-4524-b412-bc2a92b4b8da name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.506285576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f25366a-c14d-49ab-957b-64226cb95f94 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.506364790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f25366a-c14d-49ab-957b-64226cb95f94 name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.508234605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45de3535-33cc-4154-8932-673a7033449f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.508931565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559308508898385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45de3535-33cc-4154-8932-673a7033449f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.510048803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d818928-9300-4115-a6d6-6aec02672037 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.510227115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d818928-9300-4115-a6d6-6aec02672037 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.510829496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d818928-9300-4115-a6d6-6aec02672037 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.556706173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1042d01f-7ae5-4a97-9166-c78721592a4a name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.556813452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1042d01f-7ae5-4a97-9166-c78721592a4a name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.558038736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc0f26d7-ebe7-412a-92cc-a0312afb77ad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.558459391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559308558436753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc0f26d7-ebe7-412a-92cc-a0312afb77ad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.559133119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=794ef811-c27a-4723-b06f-93f5afde22a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.559210538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=794ef811-c27a-4723-b06f-93f5afde22a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.559554370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=794ef811-c27a-4723-b06f-93f5afde22a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.603536579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5aa9f04-41aa-4264-82d9-a0220dcc0c7e name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.603630454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5aa9f04-41aa-4264-82d9-a0220dcc0c7e name=/runtime.v1.RuntimeService/Version
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.604573202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76f33ebf-a4d7-4d7c-8771-6bd620ce7c6b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.605221728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713559308605173069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76f33ebf-a4d7-4d7c-8771-6bd620ce7c6b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.605847710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11c5e891-265e-46bb-bdf0-541132e0ab6c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.605926685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11c5e891-265e-46bb-bdf0-541132e0ab6c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 20:41:48 multinode-151935 crio[2862]: time="2024-04-19 20:41:48.606450911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f6ce16249835a09e221e3489920a6e44f0731e7fbcb4de72956f9996d7dbfd5,PodSandboxId:6e273ec8d1d9a6ebf4ff43e5d8c6bec32e690c1331ad4fa36fe02475cb7bee39,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713559121283632798,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34,PodSandboxId:56fac3785a6d63904d719a28eafe17d817055abc6bfbeef5f9837881968b7904,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713559087762251903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1,PodSandboxId:ee62b2f1b19e25a2a8834f5613c13c77ff59e1932a493f19f155e69efb466e9c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713559087776613481,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a264fca-0e90-4c53-a0e8-baffa
a4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adaf22179ac8b1b699e84c8199bcd18ed3950a481e2f11d444b01c239ce8bf4a,PodSandboxId:158ff93c291536529c680402e7335482e2dd64dc419272c42737fd5ea0a5e682,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713559087695712917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},An
notations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51,PodSandboxId:e600f5c04a75c0d182f4faad3a60664cfcb64331cab4feb9170a7e326d6dcfb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713559087604073894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25,},Annotations:map[string]string{io.ku
bernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f,PodSandboxId:ece428f85ee15cb1e0e3a89e20ec98c27cacd149b198f3d04df300611d3d9a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713559082797930677,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064,PodSandboxId:8028d39464849d5c8bd3baac6d8f5bf2cccd6be84515dd5608f5a4c240299b20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713559082780495150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.container.hash: f00f051a,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a,PodSandboxId:79119ff513a4d1f40f4e2bd6da7404146113ee3acdf4a0c49e4adfc406303895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713559082707340741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0,PodSandboxId:7735d89a349fa7fa1baeef39c9f773f25143cadc3bc50b5964974c5a863b9ff9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713559082657770241,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4376dcb9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81914a16099f9a1e0706dc45959bb1c3a02dc413419a5401351bcb4f6ceda517,PodSandboxId:8b5dc1eab8597aad4e585ba0f29b5e0e16a7ad3b2bded72bb1ea4dfdb88cda1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713558779665876319,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-f2s7v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 882e1af3-63cf-42b9-ae3b-2ea2280ff033,},Annotations:map[string]string{io.kubernetes.container.hash: 21068a3b,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8,PodSandboxId:f1046615fbdaf2d6de65438ba83e27e716adc9eb1d6d9760112f52d4b9e5385c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713558733277095475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ncj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff591f-c922-499e-b7a3-b79db23598bb,},Annotations:map[string]string{io.kubernetes.container.hash: b16a48de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9effe8852fc9f4094da005da55010d36451bea449d5fb0276eadd8aa811ee50a,PodSandboxId:91a88e787bfd324eb3e6eff874ffece2658e4de8bbcb5194c5cff741c3853fe8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713558732333158251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ac701485-7f72-481a-8bd5-2e40f0685d63,},Annotations:map[string]string{io.kubernetes.container.hash: cfebf487,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8,PodSandboxId:536bb6456a58387e37ee3e79aebfc74c0ed71845976fe89d0f52c7a7ccbcc43c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713558731023442670,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mgj2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a264fca-0e90-4c53-a0e8-baffaa4a5f1d,},Annotations:map[string]string{io.kubernetes.container.hash: e0239aaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b,PodSandboxId:2309b7e0018ff90e7cf36e27faf2ca757e1ec4712a4699533fbf9f8442a64ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713558730841559837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfnc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dbd4cb1-46ef-40cf-a2eb
-fe9022d29c25,},Annotations:map[string]string{io.kubernetes.container.hash: 62fafd31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3,PodSandboxId:99f0597788a12c9d23fe008934081686c25d0963cd8470ac829aa9ac883ba461,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713558710649911716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba6a20a39984b4a10e34ef81c4d22f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec,PodSandboxId:4fa61e482001282eac35b156b636aba72321f1e294d5f6bbfeeb1d0098c91289,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713558710620690625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd59cc3611697b4e67721c9ef5d2612d,},Annotations:map[string]string{io.kubernetes.
container.hash: f00f051a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027,PodSandboxId:b7de6532ad4eb40c2c3c1816ed8ec936a5a720a649feafc6ddd0fc177e1aca27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713558710574434886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178a945acd4515fc2188d01a47409bb5,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b,PodSandboxId:7a03fdbb6f8060c955f79248c2fa41f4a1dbc0960241140ea192c48967c14956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713558710541311025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a85e252d857b52aeb23874c93d775d5,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4376dcb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11c5e891-265e-46bb-bdf0-541132e0ab6c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f6ce16249835       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6e273ec8d1d9a       busybox-fc5497c4f-f2s7v
	6de66ff0d75df       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   ee62b2f1b19e2       kindnet-mgj2r
	d00f94da13776       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   56fac3785a6d6       coredns-7db6d8ff4d-7ncj2
	adaf22179ac8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   158ff93c29153       storage-provisioner
	c4cd494c894c1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   e600f5c04a75c       kube-proxy-pfnc8
	da54f2104a830       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   ece428f85ee15       kube-scheduler-multinode-151935
	b6bd063626d64       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   8028d39464849       etcd-multinode-151935
	5e5dc0ad75a91       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   79119ff513a4d       kube-controller-manager-multinode-151935
	1bbb8b32a56a4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   7735d89a349fa       kube-apiserver-multinode-151935
	81914a16099f9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   8b5dc1eab8597       busybox-fc5497c4f-f2s7v
	ae6c0e5292985       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   f1046615fbdaf       coredns-7db6d8ff4d-7ncj2
	9effe8852fc9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   91a88e787bfd3       storage-provisioner
	89d6ab542b25d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   536bb6456a583       kindnet-mgj2r
	24ecb604c74da       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   2309b7e0018ff       kube-proxy-pfnc8
	1e419925fabf2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      9 minutes ago       Exited              kube-scheduler            0                   99f0597788a12       kube-scheduler-multinode-151935
	81e13f7892581       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   4fa61e4820012       etcd-multinode-151935
	9f504fd220a12       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      9 minutes ago       Exited              kube-controller-manager   0                   b7de6532ad4eb       kube-controller-manager-multinode-151935
	3db906bb1d4a7       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      9 minutes ago       Exited              kube-apiserver            0                   7a03fdbb6f806       kube-apiserver-multinode-151935
	
	
	==> coredns [ae6c0e529298575dcd085f7872eabc95669650229891f6017abb3d3926d93fc8] <==
	[INFO] 10.244.1.2:55673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001780232s
	[INFO] 10.244.1.2:42779 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121965s
	[INFO] 10.244.1.2:54976 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087587s
	[INFO] 10.244.1.2:54596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001056972s
	[INFO] 10.244.1.2:49581 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128884s
	[INFO] 10.244.1.2:53346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124796s
	[INFO] 10.244.1.2:60576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082191s
	[INFO] 10.244.0.3:53008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073484s
	[INFO] 10.244.0.3:37312 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049401s
	[INFO] 10.244.0.3:53048 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066487s
	[INFO] 10.244.0.3:56090 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039983s
	[INFO] 10.244.1.2:58089 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182872s
	[INFO] 10.244.1.2:37086 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104024s
	[INFO] 10.244.1.2:40482 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093839s
	[INFO] 10.244.1.2:49147 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123093s
	[INFO] 10.244.0.3:49920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118155s
	[INFO] 10.244.0.3:53138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103693s
	[INFO] 10.244.0.3:34890 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083491s
	[INFO] 10.244.0.3:51926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115206s
	[INFO] 10.244.1.2:46032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.1.2:57913 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095258s
	[INFO] 10.244.1.2:45352 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179796s
	[INFO] 10.244.1.2:47384 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089741s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d00f94da137767b0a453c9b511cc9d444b3d681ecde0d365231e7f9eb9027f34] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41999 - 13478 "HINFO IN 101043259357947176.8207489402340935840. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013216151s
	
	
	==> describe nodes <==
	Name:               multinode-151935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=multinode-151935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_31_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:31:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151935
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:41:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:31:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:38:06 +0000   Fri, 19 Apr 2024 20:32:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-151935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb916782ac324c04b03ac6d164cc3d51
	  System UUID:                cb916782-ac32-4c04-b03a-c6d164cc3d51
	  Boot ID:                    21d22713-d4ba-4521-b0fa-24d0e20f332c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2s7v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 coredns-7db6d8ff4d-7ncj2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m39s
	  kube-system                 etcd-multinode-151935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m53s
	  kube-system                 kindnet-mgj2r                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m39s
	  kube-system                 kube-apiserver-multinode-151935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-controller-manager-multinode-151935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-proxy-pfnc8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-scheduler-multinode-151935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m53s                  kubelet          Node multinode-151935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m53s                  kubelet          Node multinode-151935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s                  kubelet          Node multinode-151935 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m40s                  node-controller  Node multinode-151935 event: Registered Node multinode-151935 in Controller
	  Normal  NodeReady                9m37s                  kubelet          Node multinode-151935 status is now: NodeReady
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node multinode-151935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node multinode-151935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node multinode-151935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-151935 event: Registered Node multinode-151935 in Controller
	
	
	Name:               multinode-151935-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=multinode-151935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_19T20_38_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:38:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 20:39:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:40:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:40:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:40:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Apr 2024 20:39:14 +0000   Fri, 19 Apr 2024 20:40:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    multinode-151935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85e078ecfcc44c72b0c2735fb2a58458
	  System UUID:                85e078ec-fcc4-4c72-b0c2-735fb2a58458
	  Boot ID:                    acf7c25a-2ae1-4cfb-acec-9349d36a9a2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zkwq6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-v9lfd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m3s
	  kube-system                 kube-proxy-mb775           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  Starting                 8m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m3s (x2 over 9m3s)  kubelet          Node multinode-151935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m3s (x2 over 9m3s)  kubelet          Node multinode-151935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m3s (x2 over 9m3s)  kubelet          Node multinode-151935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m54s                kubelet          Node multinode-151935-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)  kubelet          Node multinode-151935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)  kubelet          Node multinode-151935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)  kubelet          Node multinode-151935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m56s                kubelet          Node multinode-151935-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                  node-controller  Node multinode-151935-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060674] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067881] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.168558] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.148384] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.296625] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.486032] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.057124] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.288351] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.959622] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.084938] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.091112] kauditd_printk_skb: 30 callbacks suppressed
	[Apr19 20:32] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.134814] kauditd_printk_skb: 21 callbacks suppressed
	[ +47.877989] kauditd_printk_skb: 84 callbacks suppressed
	[Apr19 20:37] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.145625] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.179886] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.154164] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.290106] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.748943] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[Apr19 20:38] systemd-fstab-generator[3071]: Ignoring "noauto" option for root device
	[  +5.734286] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.162784] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[  +0.110444] kauditd_printk_skb: 32 callbacks suppressed
	[ +22.465643] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [81e13f7892581a0cff3e09b21dfda205dc8be89a546793e53803d46f9c592fec] <==
	{"level":"info","ts":"2024-04-19T20:32:51.149116Z","caller":"traceutil/trace.go:171","msg":"trace[1960558022] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"263.684207ms","start":"2024-04-19T20:32:50.885353Z","end":"2024-04-19T20:32:51.149037Z","steps":["trace[1960558022] 'process raft request'  (duration: 198.113813ms)","trace[1960558022] 'compare'  (duration: 65.410551ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:32.563288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.668407ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517751175823665660 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151935-m03.17c7c8a933b5563c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151935-m03.17c7c8a933b5563c\" value_size:642 lease:1294379138968889611 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-19T20:33:32.563831Z","caller":"traceutil/trace.go:171","msg":"trace[499971293] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:613; }","duration":"170.950638ms","start":"2024-04-19T20:33:32.392853Z","end":"2024-04-19T20:33:32.563803Z","steps":["trace[499971293] 'read index received'  (duration: 170.37593ms)","trace[499971293] 'applied index is now lower than readState.Index'  (duration: 574.053µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-19T20:33:32.564013Z","caller":"traceutil/trace.go:171","msg":"trace[716437160] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"205.842579ms","start":"2024-04-19T20:33:32.358105Z","end":"2024-04-19T20:33:32.563948Z","steps":["trace[716437160] 'process raft request'  (duration: 205.618467ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:33:32.564376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.510753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-19T20:33:32.564486Z","caller":"traceutil/trace.go:171","msg":"trace[7069641] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:582; }","duration":"171.695105ms","start":"2024-04-19T20:33:32.392774Z","end":"2024-04-19T20:33:32.564469Z","steps":["trace[7069641] 'agreement among raft nodes before linearized reading'  (duration: 171.500362ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:32.56605Z","caller":"traceutil/trace.go:171","msg":"trace[708220390] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"250.67648ms","start":"2024-04-19T20:33:32.313267Z","end":"2024-04-19T20:33:32.563943Z","steps":["trace[708220390] 'process raft request'  (duration: 72.575873ms)","trace[708220390] 'compare'  (duration: 176.583148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:37.474154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.239102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151935-m03\" ","response":"range_response_count:1 size:3030"}
	{"level":"info","ts":"2024-04-19T20:33:37.474622Z","caller":"traceutil/trace.go:171","msg":"trace[1654905082] range","detail":"{range_begin:/registry/minions/multinode-151935-m03; range_end:; response_count:1; response_revision:617; }","duration":"109.614708ms","start":"2024-04-19T20:33:37.364857Z","end":"2024-04-19T20:33:37.474472Z","steps":["trace[1654905082] 'range keys from in-memory index tree'  (duration: 109.060984ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.747012Z","caller":"traceutil/trace.go:171","msg":"trace[239417769] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"195.240305ms","start":"2024-04-19T20:33:37.551706Z","end":"2024-04-19T20:33:37.746947Z","steps":["trace[239417769] 'process raft request'  (duration: 195.067773ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.746947Z","caller":"traceutil/trace.go:171","msg":"trace[1071672218] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:654; }","duration":"122.514345ms","start":"2024-04-19T20:33:37.624409Z","end":"2024-04-19T20:33:37.746923Z","steps":["trace[1071672218] 'read index received'  (duration: 122.330153ms)","trace[1071672218] 'applied index is now lower than readState.Index'  (duration: 182.871µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:33:37.748032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.598935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151935-m03\" ","response":"range_response_count:1 size:3030"}
	{"level":"info","ts":"2024-04-19T20:33:37.748105Z","caller":"traceutil/trace.go:171","msg":"trace[31971872] range","detail":"{range_begin:/registry/minions/multinode-151935-m03; range_end:; response_count:1; response_revision:618; }","duration":"123.703811ms","start":"2024-04-19T20:33:37.624385Z","end":"2024-04-19T20:33:37.748089Z","steps":["trace[31971872] 'agreement among raft nodes before linearized reading'  (duration: 122.653469ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:33:37.937297Z","caller":"traceutil/trace.go:171","msg":"trace[740861935] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"125.749175ms","start":"2024-04-19T20:33:37.811528Z","end":"2024-04-19T20:33:37.937277Z","steps":["trace[740861935] 'process raft request'  (duration: 125.640378ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-19T20:36:26.750233Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-19T20:36:26.750396Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-151935","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	{"level":"warn","ts":"2024-04-19T20:36:26.750489Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:36:26.750572Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/04/19 20:36:26 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-19T20:36:26.802823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-19T20:36:26.803334Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-19T20:36:26.804685Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97ba5874d4d591f6","current-leader-member-id":"97ba5874d4d591f6"}
	{"level":"info","ts":"2024-04-19T20:36:26.806886Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:36:26.807127Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:36:26.807174Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-151935","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	
	
	==> etcd [b6bd063626d6420ba255d53f2781d07fdc51d64522145633da5aac7530f16064] <==
	{"level":"info","ts":"2024-04-19T20:38:03.237389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:38:03.237403Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:38:03.237721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 switched to configuration voters=(10933148304205517302)"}
	{"level":"info","ts":"2024-04-19T20:38:03.2378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","added-peer-id":"97ba5874d4d591f6","added-peer-peer-urls":["https://192.168.39.193:2380"]}
	{"level":"info","ts":"2024-04-19T20:38:03.237928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:38:03.238031Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:38:03.278249Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T20:38:03.27858Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:38:03.278652Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:38:03.278899Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:38:03.281Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-04-19T20:38:04.974378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.974439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.974491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-04-19T20:38:04.97452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.974544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-04-19T20:38:04.980075Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:multinode-151935 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:38:04.980091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:38:04.980193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:38:04.980601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T20:38:04.980669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-19T20:38:04.982507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"info","ts":"2024-04-19T20:38:04.982564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:41:49 up 10 min,  0 users,  load average: 0.17, 0.13, 0.08
	Linux multinode-151935 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6de66ff0d75dfb5e86a6e7a14b78525262bce1b3229c8474bda4e6bf0d1468a1] <==
	I0419 20:40:48.847722       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:40:58.853056       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:40:58.853103       1 main.go:227] handling current node
	I0419 20:40:58.853114       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:40:58.853120       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:41:08.858151       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:41:08.858256       1 main.go:227] handling current node
	I0419 20:41:08.858279       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:41:08.858296       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:41:18.871610       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:41:18.871740       1 main.go:227] handling current node
	I0419 20:41:18.871772       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:41:18.871791       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:41:28.878926       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:41:28.879091       1 main.go:227] handling current node
	I0419 20:41:28.879117       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:41:28.879137       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:41:38.884552       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:41:38.884606       1 main.go:227] handling current node
	I0419 20:41:38.884617       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:41:38.884624       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:41:48.898800       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:41:48.898850       1 main.go:227] handling current node
	I0419 20:41:48.898861       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:41:48.898867       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [89d6ab542b25de98bfc8e7a206f93ac81c82e1b42552c3164fe15c84faeb3fa8] <==
	I0419 20:35:41.908446       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:35:51.921307       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:35:51.921478       1 main.go:227] handling current node
	I0419 20:35:51.921509       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:35:51.921529       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:35:51.921672       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:35:51.921694       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:01.932663       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:01.932901       1 main.go:227] handling current node
	I0419 20:36:01.932941       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:01.933053       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:01.933258       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:01.933313       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:11.941239       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:11.941467       1 main.go:227] handling current node
	I0419 20:36:11.941575       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:11.941600       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:11.941746       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:11.941771       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	I0419 20:36:21.947833       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0419 20:36:21.947929       1 main.go:227] handling current node
	I0419 20:36:21.948014       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0419 20:36:21.948046       1 main.go:250] Node multinode-151935-m02 has CIDR [10.244.1.0/24] 
	I0419 20:36:21.948205       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0419 20:36:21.948234       1 main.go:250] Node multinode-151935-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1bbb8b32a56a4a975149fa0f106299e5fae0a887d06abeee191eaeaee2a813c0] <==
	I0419 20:38:06.272036       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0419 20:38:06.464023       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:38:06.464132       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 20:38:06.464160       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 20:38:06.465046       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:38:06.465829       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 20:38:06.466059       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 20:38:06.470831       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 20:38:06.472027       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0419 20:38:06.472095       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 20:38:06.472127       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:38:06.472149       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:38:06.472172       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:38:06.472195       1 cache.go:39] Caches are synced for autoregister controller
	I0419 20:38:06.475028       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:38:06.475060       1 policy_source.go:224] refreshing policies
	I0419 20:38:06.476511       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:38:07.273070       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 20:38:08.966532       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:38:09.084206       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 20:38:09.096217       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:38:09.169424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 20:38:09.175810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 20:38:19.068063       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 20:38:19.117446       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3db906bb1d4a76649a77784bfe8a7880e12e1a0b90c667b22cef30d142b10b9b] <==
	I0419 20:36:26.759744       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0419 20:36:26.759983       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0419 20:36:26.762591       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0419 20:36:26.764248       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764337       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764372       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.764406       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0419 20:36:26.765369       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765415       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765426       1 controller.go:176] quota evaluator worker shutdown
	I0419 20:36:26.765433       1 controller.go:176] quota evaluator worker shutdown
	E0419 20:36:26.767324       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767378       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767413       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767446       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767482       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767516       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.767529       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771559       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771637       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771644       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771692       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771669       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771740       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0419 20:36:26.771773       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [5e5dc0ad75a911f6324fc7ed9aa1eb1e3d8bcdcb296d853dc073a8e0199dcd0a] <==
	I0419 20:38:43.647296       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:38:45.521388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.261µs"
	I0419 20:38:45.576323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.904µs"
	I0419 20:38:45.593157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.085µs"
	I0419 20:38:45.610318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.332µs"
	I0419 20:38:45.620411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.779µs"
	I0419 20:38:45.624344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.061µs"
	I0419 20:38:49.844802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.445µs"
	I0419 20:38:52.523220       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:38:52.543417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.724µs"
	I0419 20:38:52.557866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.917µs"
	I0419 20:38:55.951540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.101942ms"
	I0419 20:38:55.951781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.129µs"
	I0419 20:39:11.041103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:39:12.076511       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:39:12.077148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:39:12.090776       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:39:21.262847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:39:27.115917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:40:09.133876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.633102ms"
	I0419 20:40:09.141269       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="510.857µs"
	I0419 20:40:19.018622       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-b448r"
	I0419 20:40:19.041275       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-b448r"
	I0419 20:40:19.041319       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z6zkf"
	I0419 20:40:19.075774       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z6zkf"
	
	
	==> kube-controller-manager [9f504fd220a128e55b6645470ec480d5488602bcd7c12fbf65aec3e91dd6b027] <==
	I0419 20:32:45.803048       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m02" podCIDRs=["10.244.1.0/24"]
	I0419 20:32:48.479454       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151935-m02"
	I0419 20:32:54.606341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:32:56.882556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.3739ms"
	I0419 20:32:56.893401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.739588ms"
	I0419 20:32:56.898274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.208µs"
	I0419 20:32:56.914704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.244µs"
	I0419 20:32:56.931809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.021µs"
	I0419 20:33:00.156434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.739255ms"
	I0419 20:33:00.156527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.809µs"
	I0419 20:33:00.727290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.931358ms"
	I0419 20:33:00.727427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.608µs"
	I0419 20:33:32.569916       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:33:32.570114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:33:32.610336       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.2.0/24"]
	I0419 20:33:33.500891       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151935-m03"
	I0419 20:33:42.143153       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:11.587416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:12.788306       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151935-m03\" does not exist"
	I0419 20:34:12.788855       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:12.796538       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151935-m03" podCIDRs=["10.244.3.0/24"]
	I0419 20:34:20.899683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m02"
	I0419 20:34:58.551807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151935-m03"
	I0419 20:34:58.598115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.440083ms"
	I0419 20:34:58.598363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.141µs"
	
	
	==> kube-proxy [24ecb604c74dafa0746f94f789b7d0ea265c64e0682d8fe216e195afbab62b4b] <==
	I0419 20:32:10.970926       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:32:10.979351       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0419 20:32:11.042429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:32:11.043069       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:32:11.043120       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:32:11.048887       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:32:11.049175       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:32:11.049218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:32:11.051700       1 config.go:192] "Starting service config controller"
	I0419 20:32:11.051743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:32:11.051764       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:32:11.051768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:32:11.053742       1 config.go:319] "Starting node config controller"
	I0419 20:32:11.053775       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:32:11.151886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:32:11.151890       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:32:11.153929       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4cd494c894c17c2a4daf1058236d59c495dda233ae2bd1bbf6892c4aaf26e51] <==
	I0419 20:38:07.983932       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:38:08.013636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0419 20:38:08.088850       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:38:08.089046       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:38:08.089136       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:38:08.097460       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:38:08.097654       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:38:08.097691       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:38:08.103272       1 config.go:192] "Starting service config controller"
	I0419 20:38:08.103306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:38:08.103335       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:38:08.103339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:38:08.103748       1 config.go:319] "Starting node config controller"
	I0419 20:38:08.104021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:38:08.204364       1 shared_informer.go:320] Caches are synced for node config
	I0419 20:38:08.204412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:38:08.204520       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1e419925fabf28877b9458029daca0b577eec0b6a4a50349f78e831d57cb74a3] <==
	E0419 20:31:53.115807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:53.115934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0419 20:31:53.116041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0419 20:31:53.120150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:31:53.120266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:31:53.968435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0419 20:31:53.968507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0419 20:31:53.981803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0419 20:31:53.981853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0419 20:31:54.128507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0419 20:31:54.128599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0419 20:31:54.133702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 20:31:54.133840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 20:31:54.201290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0419 20:31:54.201362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0419 20:31:54.204575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0419 20:31:54.204630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:54.210685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 20:31:54.210773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 20:31:54.227503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 20:31:54.227615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 20:31:54.235122       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0419 20:31:54.236177       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0419 20:31:57.200793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0419 20:36:26.748652       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da54f2104a830dd6bb8a68861ca87d9d43152d76a294cf0e2ecacc078f76178f] <==
	I0419 20:38:03.848100       1 serving.go:380] Generated self-signed cert in-memory
	W0419 20:38:06.345899       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0419 20:38:06.346053       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 20:38:06.346087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0419 20:38:06.346171       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 20:38:06.391666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 20:38:06.391721       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:38:06.398848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 20:38:06.401563       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 20:38:06.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 20:38:06.401667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:38:06.501925       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.977838    3078 topology_manager.go:215] "Topology Admit Handler" podUID="ac701485-7f72-481a-8bd5-2e40f0685d63" podNamespace="kube-system" podName="storage-provisioner"
	Apr 19 20:38:06 multinode-151935 kubelet[3078]: I0419 20:38:06.978019    3078 topology_manager.go:215] "Topology Admit Handler" podUID="882e1af3-63cf-42b9-ae3b-2ea2280ff033" podNamespace="default" podName="busybox-fc5497c4f-f2s7v"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.004285    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.067630    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-xtables-lock\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068061    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25-lib-modules\") pod \"kube-proxy-pfnc8\" (UID: \"4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25\") " pod="kube-system/kube-proxy-pfnc8"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068207    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac701485-7f72-481a-8bd5-2e40f0685d63-tmp\") pod \"storage-provisioner\" (UID: \"ac701485-7f72-481a-8bd5-2e40f0685d63\") " pod="kube-system/storage-provisioner"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068373    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-lib-modules\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068433    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25-xtables-lock\") pod \"kube-proxy-pfnc8\" (UID: \"4dbd4cb1-46ef-40cf-a2eb-fe9022d29c25\") " pod="kube-system/kube-proxy-pfnc8"
	Apr 19 20:38:07 multinode-151935 kubelet[3078]: I0419 20:38:07.068485    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a264fca-0e90-4c53-a0e8-baffaa4a5f1d-cni-cfg\") pod \"kindnet-mgj2r\" (UID: \"6a264fca-0e90-4c53-a0e8-baffaa4a5f1d\") " pod="kube-system/kindnet-mgj2r"
	Apr 19 20:38:15 multinode-151935 kubelet[3078]: I0419 20:38:15.730154    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 19 20:39:02 multinode-151935 kubelet[3078]: E0419 20:39:02.075185    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:39:02 multinode-151935 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:40:02 multinode-151935 kubelet[3078]: E0419 20:40:02.071947    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:40:02 multinode-151935 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:40:02 multinode-151935 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:40:02 multinode-151935 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:40:02 multinode-151935 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 19 20:41:02 multinode-151935 kubelet[3078]: E0419 20:41:02.072931    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 19 20:41:02 multinode-151935 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 19 20:41:02 multinode-151935 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 19 20:41:02 multinode-151935 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 19 20:41:02 multinode-151935 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:41:48.135072  409020 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-151935 -n multinode-151935
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-151935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.57s)
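Note on the "failed to output last start logs" line in the stderr block above: "bufio.Scanner: token too long" is the standard Go bufio error returned when a single line of the scanned file exceeds the Scanner's buffer (64 KiB by default), which is easy to hit with a long lastStart.txt line. The sketch below is illustrative only and is not minikube's code; the file name is reused from the log purely for context. It shows the pattern that avoids the error by enlarging the scanner buffer.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path, taken from the log above; any file containing a
		// very long line triggers the same error with a default Scanner.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call the scan stops with bufio.ErrTooLong
		// ("bufio.Scanner: token too long") once a line exceeds 64 KiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}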

                                                
                                    
x
+
TestPreload (275.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-047248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0419 20:47:10.227639  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-047248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m12.328575869s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-047248 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-047248 image pull gcr.io/k8s-minikube/busybox: (2.954382244s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-047248
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-047248: exit status 82 (2m0.495746789s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-047248"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-047248 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-19 20:49:54.500201768 +0000 UTC m=+5545.291718849
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-047248 -n test-preload-047248
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-047248 -n test-preload-047248: exit status 3 (18.59823497s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 20:50:13.093034  411842 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.160:22: connect: no route to host
	E0419 20:50:13.093056  411842 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.160:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-047248" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-047248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-047248
--- FAIL: TestPreload (275.32s)
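Note on the failure above: `minikube stop` exiting with status 82 corresponds to the GUEST_STOP_TIMEOUT message in the stderr block (the VM remained in state "Running" past the stop timeout), and the subsequent status check failed because SSH to 192.168.39.160:22 had no route to host. The snippet below is a minimal sketch of the pattern the test follows — run the CLI, capture combined output, and surface any non-zero exit code — and is not the actual preload_test.go helper; the binary path and profile name are copied from the log for context.

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command the test invokes; a stop that times out is reported
		// here as a non-zero exit code (82 in the run above).
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "test-preload-047248")
		out, err := cmd.CombinedOutput()
		os.Stdout.Write(out)
		if err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				fmt.Fprintf(os.Stderr, "stop failed: exit status %d\n", exitErr.ExitCode())
			} else {
				fmt.Fprintln(os.Stderr, err)
			}
			os.Exit(1)
		}
	}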

                                                
                                    
x
+
TestKubernetesUpgrade (417.25s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m5.314013669s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-270819] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-270819" primary control-plane node in "kubernetes-upgrade-270819" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:53:23.425436  413874 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:53:23.425699  413874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:53:23.425710  413874 out.go:304] Setting ErrFile to fd 2...
	I0419 20:53:23.425717  413874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:53:23.425909  413874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:53:23.426543  413874 out.go:298] Setting JSON to false
	I0419 20:53:23.427607  413874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9349,"bootTime":1713550654,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:53:23.427679  413874 start.go:139] virtualization: kvm guest
	I0419 20:53:23.430178  413874 out.go:177] * [kubernetes-upgrade-270819] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:53:23.431764  413874 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:53:23.433118  413874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:53:23.431701  413874 notify.go:220] Checking for updates...
	I0419 20:53:23.434586  413874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:53:23.436190  413874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:53:23.437471  413874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:53:23.438699  413874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:53:23.440494  413874 config.go:182] Loaded profile config "NoKubernetes-097851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:53:23.440598  413874 config.go:182] Loaded profile config "cert-expiration-198159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:53:23.440745  413874 config.go:182] Loaded profile config "offline-crio-102119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:53:23.440844  413874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:53:23.476187  413874 out.go:177] * Using the kvm2 driver based on user configuration
	I0419 20:53:23.477471  413874 start.go:297] selected driver: kvm2
	I0419 20:53:23.477484  413874 start.go:901] validating driver "kvm2" against <nil>
	I0419 20:53:23.477497  413874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:53:23.478215  413874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:53:23.478300  413874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:53:23.493276  413874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:53:23.493325  413874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 20:53:23.493535  413874 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 20:53:23.493587  413874 cni.go:84] Creating CNI manager for ""
	I0419 20:53:23.493600  413874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:53:23.493606  413874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 20:53:23.493660  413874 start.go:340] cluster config:
	{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:53:23.493772  413874 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:53:23.495832  413874 out.go:177] * Starting "kubernetes-upgrade-270819" primary control-plane node in "kubernetes-upgrade-270819" cluster
	I0419 20:53:23.497112  413874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 20:53:23.497149  413874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:53:23.497167  413874 cache.go:56] Caching tarball of preloaded images
	I0419 20:53:23.497263  413874 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:53:23.497274  413874 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0419 20:53:23.497356  413874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json ...
	I0419 20:53:23.497373  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json: {Name:mkeb619f8164cafeb7f1faaf6580dd6e93c609c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
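
	The profile config written above is plain JSON. A minimal Go sketch of reading a few of the fields visible in the dump back out of that config.json; the struct below is a hand-written subset for illustration, not minikube's actual config type:

	// Sketch: decode a subset of the saved profile config.json.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type kubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}

	type clusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		KubernetesConfig kubernetesConfig
	}

	func main() {
		// Path taken from the log line above; adjust for a local environment.
		path := "/home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read config:", err)
			os.Exit(1)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, "decode config:", err)
			os.Exit(1)
		}
		fmt.Printf("%s: driver=%s memory=%dMB kubernetes=%s runtime=%s\n",
			cfg.Name, cfg.Driver, cfg.Memory,
			cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
	}
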
	I0419 20:53:23.497493  413874 start.go:360] acquireMachinesLock for kubernetes-upgrade-270819: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:53:55.417766  413874 start.go:364] duration metric: took 31.920242755s to acquireMachinesLock for "kubernetes-upgrade-270819"
	I0419 20:53:55.417849  413874 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0419 20:53:55.417992  413874 start.go:125] createHost starting for "" (driver="kvm2")
	I0419 20:53:55.420453  413874 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 20:53:55.420718  413874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:53:55.420780  413874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:53:55.437694  413874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0419 20:53:55.438079  413874 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:53:55.438742  413874 main.go:141] libmachine: Using API Version  1
	I0419 20:53:55.438770  413874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:53:55.439214  413874 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:53:55.439440  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:53:55.439622  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:53:55.439814  413874 start.go:159] libmachine.API.Create for "kubernetes-upgrade-270819" (driver="kvm2")
	I0419 20:53:55.439842  413874 client.go:168] LocalClient.Create starting
	I0419 20:53:55.439875  413874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem
	I0419 20:53:55.439907  413874 main.go:141] libmachine: Decoding PEM data...
	I0419 20:53:55.439924  413874 main.go:141] libmachine: Parsing certificate...
	I0419 20:53:55.439973  413874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem
	I0419 20:53:55.439990  413874 main.go:141] libmachine: Decoding PEM data...
	I0419 20:53:55.440005  413874 main.go:141] libmachine: Parsing certificate...
	I0419 20:53:55.440021  413874 main.go:141] libmachine: Running pre-create checks...
	I0419 20:53:55.440034  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .PreCreateCheck
	I0419 20:53:55.440439  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetConfigRaw
	I0419 20:53:55.440893  413874 main.go:141] libmachine: Creating machine...
	I0419 20:53:55.440911  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .Create
	I0419 20:53:55.441069  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Creating KVM machine...
	I0419 20:53:55.442369  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found existing default KVM network
	I0419 20:53:55.443648  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.443463  414172 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:61:c1} reservation:<nil>}
	I0419 20:53:55.445105  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.445005  414172 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028a860}
	I0419 20:53:55.445131  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | created network xml: 
	I0419 20:53:55.445142  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | <network>
	I0419 20:53:55.445151  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   <name>mk-kubernetes-upgrade-270819</name>
	I0419 20:53:55.445168  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   <dns enable='no'/>
	I0419 20:53:55.445176  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   
	I0419 20:53:55.445186  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0419 20:53:55.445193  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |     <dhcp>
	I0419 20:53:55.445204  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0419 20:53:55.445219  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |     </dhcp>
	I0419 20:53:55.445228  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   </ip>
	I0419 20:53:55.445237  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG |   
	I0419 20:53:55.445245  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | </network>
	I0419 20:53:55.445252  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | 
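
	The private network definition logged above is plain libvirt XML. A minimal sketch of rendering an equivalent document with Go's text/template, with the name and address values taken from the log; the template and parameter names are illustrative, not the driver's own:

	// Sketch: render a libvirt network definition like the one logged above.
	package main

	import (
		"os"
		"text/template"
	)

	const networkTmpl = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	func main() {
		params := struct {
			Name, Gateway, Netmask, ClientMin, ClientMax string
		}{
			Name:      "mk-kubernetes-upgrade-270819",
			Gateway:   "192.168.50.1",
			Netmask:   "255.255.255.0",
			ClientMin: "192.168.50.2",
			ClientMax: "192.168.50.253",
		}
		tmpl := template.Must(template.New("net").Parse(networkTmpl))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
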
	I0419 20:53:55.451007  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | trying to create private KVM network mk-kubernetes-upgrade-270819 192.168.50.0/24...
	I0419 20:53:55.525867  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting up store path in /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819 ...
	I0419 20:53:55.525903  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | private KVM network mk-kubernetes-upgrade-270819 192.168.50.0/24 created
	I0419 20:53:55.525917  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Building disk image from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 20:53:55.525946  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Downloading /home/jenkins/minikube-integration/18669-366597/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0419 20:53:55.525971  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.525792  414172 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:53:55.786906  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.786766  414172 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa...
	I0419 20:53:55.853137  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.852977  414172 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/kubernetes-upgrade-270819.rawdisk...
	I0419 20:53:55.853178  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Writing magic tar header
	I0419 20:53:55.853214  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Writing SSH key tar header
	I0419 20:53:55.853265  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:55.853118  414172 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819 ...
	I0419 20:53:55.853300  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819 (perms=drwx------)
	I0419 20:53:55.853315  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819
	I0419 20:53:55.853327  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube/machines (perms=drwxr-xr-x)
	I0419 20:53:55.853359  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597/.minikube (perms=drwxr-xr-x)
	I0419 20:53:55.853370  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube/machines
	I0419 20:53:55.853387  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:53:55.853396  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18669-366597
	I0419 20:53:55.853411  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins/minikube-integration/18669-366597 (perms=drwxrwxr-x)
	I0419 20:53:55.853440  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0419 20:53:55.853455  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0419 20:53:55.853464  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0419 20:53:55.853474  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home/jenkins
	I0419 20:53:55.853483  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Creating domain...
	I0419 20:53:55.853500  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Checking permissions on dir: /home
	I0419 20:53:55.853510  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Skipping /home - not owner
	I0419 20:53:55.854649  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) define libvirt domain using xml: 
	I0419 20:53:55.854681  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) <domain type='kvm'>
	I0419 20:53:55.854693  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <name>kubernetes-upgrade-270819</name>
	I0419 20:53:55.854700  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <memory unit='MiB'>2200</memory>
	I0419 20:53:55.854716  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <vcpu>2</vcpu>
	I0419 20:53:55.854728  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <features>
	I0419 20:53:55.854737  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <acpi/>
	I0419 20:53:55.854747  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <apic/>
	I0419 20:53:55.854756  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <pae/>
	I0419 20:53:55.854769  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     
	I0419 20:53:55.854782  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   </features>
	I0419 20:53:55.854793  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <cpu mode='host-passthrough'>
	I0419 20:53:55.854804  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   
	I0419 20:53:55.854811  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   </cpu>
	I0419 20:53:55.854823  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <os>
	I0419 20:53:55.854831  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <type>hvm</type>
	I0419 20:53:55.854841  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <boot dev='cdrom'/>
	I0419 20:53:55.854852  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <boot dev='hd'/>
	I0419 20:53:55.854866  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <bootmenu enable='no'/>
	I0419 20:53:55.854879  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   </os>
	I0419 20:53:55.854890  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   <devices>
	I0419 20:53:55.854901  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <disk type='file' device='cdrom'>
	I0419 20:53:55.854910  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/boot2docker.iso'/>
	I0419 20:53:55.854918  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <target dev='hdc' bus='scsi'/>
	I0419 20:53:55.854925  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <readonly/>
	I0419 20:53:55.854932  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </disk>
	I0419 20:53:55.854937  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <disk type='file' device='disk'>
	I0419 20:53:55.854946  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0419 20:53:55.854955  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <source file='/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/kubernetes-upgrade-270819.rawdisk'/>
	I0419 20:53:55.854964  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <target dev='hda' bus='virtio'/>
	I0419 20:53:55.854969  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </disk>
	I0419 20:53:55.854978  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <interface type='network'>
	I0419 20:53:55.855017  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <source network='mk-kubernetes-upgrade-270819'/>
	I0419 20:53:55.855045  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <model type='virtio'/>
	I0419 20:53:55.855056  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </interface>
	I0419 20:53:55.855067  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <interface type='network'>
	I0419 20:53:55.855080  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <source network='default'/>
	I0419 20:53:55.855090  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <model type='virtio'/>
	I0419 20:53:55.855102  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </interface>
	I0419 20:53:55.855113  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <serial type='pty'>
	I0419 20:53:55.855147  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <target port='0'/>
	I0419 20:53:55.855174  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </serial>
	I0419 20:53:55.855198  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <console type='pty'>
	I0419 20:53:55.855210  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <target type='serial' port='0'/>
	I0419 20:53:55.855221  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </console>
	I0419 20:53:55.855232  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     <rng model='virtio'>
	I0419 20:53:55.855254  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)       <backend model='random'>/dev/random</backend>
	I0419 20:53:55.855270  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     </rng>
	I0419 20:53:55.855282  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     
	I0419 20:53:55.855311  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)     
	I0419 20:53:55.855325  413874 main.go:141] libmachine: (kubernetes-upgrade-270819)   </devices>
	I0419 20:53:55.855332  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) </domain>
	I0419 20:53:55.855342  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) 
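
	The domain XML above is handed to libvirt to define the VM. As a rough equivalent, the sketch below shells out to virsh to define and start a domain from a file; the kvm2 driver itself talks to libvirt through its API, so this only illustrates the same effect and assumes virsh is installed and allowed to manage qemu:///system:

	// Sketch: define and start a libvirt domain from an XML file via virsh.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func virsh(args ...string) error {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// domain.xml would hold XML like the block logged above.
		if err := virsh("define", "domain.xml"); err != nil {
			fmt.Fprintln(os.Stderr, "define failed:", err)
			os.Exit(1)
		}
		if err := virsh("start", "kubernetes-upgrade-270819"); err != nil {
			fmt.Fprintln(os.Stderr, "start failed:", err)
			os.Exit(1)
		}
	}
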
	I0419 20:53:55.862376  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:9b:6e:1b in network default
	I0419 20:53:55.863031  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Ensuring networks are active...
	I0419 20:53:55.863065  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:55.863845  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Ensuring network default is active
	I0419 20:53:55.864238  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Ensuring network mk-kubernetes-upgrade-270819 is active
	I0419 20:53:55.865050  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Getting domain xml...
	I0419 20:53:55.865929  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Creating domain...
	I0419 20:53:57.199872  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Waiting to get IP...
	I0419 20:53:57.200782  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.201290  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.201363  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:57.201273  414172 retry.go:31] will retry after 312.035855ms: waiting for machine to come up
	I0419 20:53:57.515308  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.515946  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.515972  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:57.515881  414172 retry.go:31] will retry after 311.713386ms: waiting for machine to come up
	I0419 20:53:57.829689  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.830228  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:57.830254  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:57.830198  414172 retry.go:31] will retry after 362.429289ms: waiting for machine to come up
	I0419 20:53:58.194081  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:58.194864  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:58.194890  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:58.194808  414172 retry.go:31] will retry after 542.470256ms: waiting for machine to come up
	I0419 20:53:58.738854  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:58.739520  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:58.739552  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:58.739425  414172 retry.go:31] will retry after 740.313647ms: waiting for machine to come up
	I0419 20:53:59.482039  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:53:59.482599  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:53:59.482631  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:53:59.482574  414172 retry.go:31] will retry after 869.587154ms: waiting for machine to come up
	I0419 20:54:00.353642  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:00.354337  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:00.354378  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:00.354221  414172 retry.go:31] will retry after 1.187971832s: waiting for machine to come up
	I0419 20:54:01.544066  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:01.544619  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:01.544668  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:01.544553  414172 retry.go:31] will retry after 1.279790968s: waiting for machine to come up
	I0419 20:54:02.825990  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:02.826410  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:02.826451  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:02.826382  414172 retry.go:31] will retry after 1.383481573s: waiting for machine to come up
	I0419 20:54:04.212108  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:04.212533  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:04.212566  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:04.212498  414172 retry.go:31] will retry after 1.967083507s: waiting for machine to come up
	I0419 20:54:06.181197  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:06.181715  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:06.181747  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:06.181666  414172 retry.go:31] will retry after 2.255184466s: waiting for machine to come up
	I0419 20:54:08.438844  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:08.439533  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:08.439578  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:08.439489  414172 retry.go:31] will retry after 2.860498541s: waiting for machine to come up
	I0419 20:54:11.301949  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:11.302611  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:11.302644  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:11.302531  414172 retry.go:31] will retry after 3.867623998s: waiting for machine to come up
	I0419 20:54:15.175238  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:15.175850  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find current IP address of domain kubernetes-upgrade-270819 in network mk-kubernetes-upgrade-270819
	I0419 20:54:15.175888  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | I0419 20:54:15.175779  414172 retry.go:31] will retry after 4.617573867s: waiting for machine to come up
	I0419 20:54:19.798513  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:19.799021  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has current primary IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:19.799038  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Found IP for machine: 192.168.50.60
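
	The retry lines above poll for the domain's DHCP lease with a steadily growing wait. A minimal sketch of that wait-for-IP loop; lookupIP is a placeholder, not a minikube function:

	// Sketch: poll for the machine's address, backing off between attempts.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func lookupIP(mac string) (string, error) {
		// Placeholder: in the real flow this checks the libvirt network's
		// DHCP leases for the domain's MAC address.
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, attempts int) (string, error) {
		delay := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay = delay * 3 / 2 // grow the wait, roughly like the log above
			}
		}
		return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:0f:4f:ac", 15); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
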
	I0419 20:54:19.799060  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Reserving static IP address...
	I0419 20:54:19.799380  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-270819", mac: "52:54:00:0f:4f:ac", ip: "192.168.50.60"} in network mk-kubernetes-upgrade-270819
	I0419 20:54:19.878005  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Getting to WaitForSSH function...
	I0419 20:54:19.878052  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Reserved static IP address: 192.168.50.60
	I0419 20:54:19.878067  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Waiting for SSH to be available...
	I0419 20:54:19.881065  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:19.881515  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:19.881548  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:19.881691  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Using SSH client type: external
	I0419 20:54:19.881720  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa (-rw-------)
	I0419 20:54:19.881762  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:54:19.881780  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | About to run SSH command:
	I0419 20:54:19.881794  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | exit 0
	I0419 20:54:20.008868  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | SSH cmd err, output: <nil>: 
	I0419 20:54:20.009121  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) KVM machine creation complete!
	I0419 20:54:20.009519  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetConfigRaw
	I0419 20:54:20.010141  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:20.010379  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:20.010592  413874 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:54:20.010610  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetState
	I0419 20:54:20.012063  413874 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:54:20.012077  413874 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:54:20.012083  413874 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:54:20.012090  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.014584  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.014993  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.015025  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.015146  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.015365  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.015542  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.015694  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.015878  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:20.016136  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:20.016151  413874 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:54:20.120192  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
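
	SSH readiness is checked by running `exit 0` on the guest, as logged above. A minimal sketch of the same probe using the ssh CLI with options similar to the external-client command shown earlier; host and key path are taken from the log:

	// Sketch: treat a successful `ssh ... exit 0` as "SSH is available".
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa"
		for i := 0; i < 10; i++ {
			if sshReady("192.168.50.60", key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}
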
	I0419 20:54:20.120218  413874 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:54:20.120227  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.123258  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.123718  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.123751  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.123935  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.124173  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.124339  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.124478  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.124713  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:20.124939  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:20.124953  413874 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:54:20.233947  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:54:20.234041  413874 main.go:141] libmachine: found compatible host: buildroot
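
	The provisioner is detected from the /etc/os-release contents shown above. A minimal parsing sketch; the helper is illustrative, not minikube's:

	// Sketch: parse os-release key/value pairs and check for Buildroot.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func parseOSRelease(s string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(s))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			out[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return out
	}

	func main() {
		// Content taken from the SSH output logged above.
		osRelease := `NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"`
		info := parseOSRelease(osRelease)
		if info["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
		} else {
			fmt.Println("unknown provisioner:", info["ID"])
		}
	}
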
	I0419 20:54:20.234048  413874 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:54:20.234057  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:54:20.234366  413874 buildroot.go:166] provisioning hostname "kubernetes-upgrade-270819"
	I0419 20:54:20.234398  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:54:20.234599  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.237588  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.237926  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.237967  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.238134  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.238332  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.238499  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.238667  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.238862  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:20.239094  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:20.239110  413874 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-270819 && echo "kubernetes-upgrade-270819" | sudo tee /etc/hostname
	I0419 20:54:20.363741  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-270819
	
	I0419 20:54:20.363785  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.366632  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.367015  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.367051  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.367255  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.367487  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.367663  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.367848  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.368033  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:20.368235  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:20.368252  413874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-270819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-270819/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-270819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:54:20.482740  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:54:20.482776  413874 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:54:20.482825  413874 buildroot.go:174] setting up certificates
	I0419 20:54:20.482840  413874 provision.go:84] configureAuth start
	I0419 20:54:20.482858  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:54:20.483200  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:54:20.486398  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.486812  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.486852  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.486978  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.489348  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.489688  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.489716  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.489887  413874 provision.go:143] copyHostCerts
	I0419 20:54:20.489970  413874 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:54:20.489984  413874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:54:20.490050  413874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:54:20.490160  413874 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:54:20.490171  413874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:54:20.490201  413874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:54:20.490290  413874 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:54:20.490304  413874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:54:20.490334  413874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:54:20.490395  413874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-270819 san=[127.0.0.1 192.168.50.60 kubernetes-upgrade-270819 localhost minikube]
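
	The server certificate above is generated with the listed SANs and signed by the minikube CA. The sketch below builds a certificate carrying the same SAN set with crypto/x509; for brevity it is self-signed rather than CA-signed, so it only illustrates the shape of the step:

	// Sketch: create a server certificate with the SANs from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-270819"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: hostnames and addresses the apiserver must answer for.
			DNSNames:    []string{"kubernetes-upgrade-270819", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.60")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
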
	I0419 20:54:20.649999  413874 provision.go:177] copyRemoteCerts
	I0419 20:54:20.650061  413874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:54:20.650088  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.652817  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.653187  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.653232  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.653412  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.653621  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.653784  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.653914  413874 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:54:20.744392  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:54:20.772666  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0419 20:54:20.800940  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:54:20.826619  413874 provision.go:87] duration metric: took 343.760903ms to configureAuth
	I0419 20:54:20.826652  413874 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:54:20.826875  413874 config.go:182] Loaded profile config "kubernetes-upgrade-270819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0419 20:54:20.826974  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:20.829803  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.830184  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:20.830218  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:20.830339  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:20.830527  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.830737  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:20.830915  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:20.831072  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:20.831295  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:20.831319  413874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:54:21.111863  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:54:21.111895  413874 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:54:21.111907  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetURL
	I0419 20:54:21.113365  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | Using libvirt version 6000000
	I0419 20:54:21.115862  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.116309  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.116339  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.116563  413874 main.go:141] libmachine: Docker is up and running!
	I0419 20:54:21.116582  413874 main.go:141] libmachine: Reticulating splines...
	I0419 20:54:21.116589  413874 client.go:171] duration metric: took 25.676737283s to LocalClient.Create
	I0419 20:54:21.116619  413874 start.go:167] duration metric: took 25.676805564s to libmachine.API.Create "kubernetes-upgrade-270819"
	I0419 20:54:21.116651  413874 start.go:293] postStartSetup for "kubernetes-upgrade-270819" (driver="kvm2")
	I0419 20:54:21.116682  413874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:54:21.116709  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:21.117007  413874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:54:21.117046  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:21.119650  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.120059  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.120092  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.120258  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:21.120484  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:21.120689  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:21.120869  413874 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:54:21.204171  413874 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:54:21.208604  413874 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:54:21.208653  413874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:54:21.208720  413874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:54:21.208834  413874 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:54:21.208925  413874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:54:21.219828  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:54:21.245746  413874 start.go:296] duration metric: took 129.080394ms for postStartSetup
	I0419 20:54:21.245811  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetConfigRaw
	I0419 20:54:21.246525  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:54:21.249489  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.249839  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.249890  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.250213  413874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json ...
	I0419 20:54:21.250442  413874 start.go:128] duration metric: took 25.832436382s to createHost
	I0419 20:54:21.250482  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:21.252872  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.253188  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.253219  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.253352  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:21.253544  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:21.253737  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:21.253922  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:21.254118  413874 main.go:141] libmachine: Using SSH client type: native
	I0419 20:54:21.254296  413874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:54:21.254306  413874 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 20:54:21.361866  413874 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560061.302873214
	
	I0419 20:54:21.361897  413874 fix.go:216] guest clock: 1713560061.302873214
	I0419 20:54:21.361907  413874 fix.go:229] Guest: 2024-04-19 20:54:21.302873214 +0000 UTC Remote: 2024-04-19 20:54:21.250465485 +0000 UTC m=+57.871967717 (delta=52.407729ms)
	I0419 20:54:21.361935  413874 fix.go:200] guest clock delta is within tolerance: 52.407729ms
	I0419 20:54:21.361942  413874 start.go:83] releasing machines lock for "kubernetes-upgrade-270819", held for 25.94413743s
	I0419 20:54:21.361983  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:21.362301  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:54:21.365252  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.365702  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.365735  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.365861  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:21.366571  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:21.366753  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:54:21.366860  413874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:54:21.366903  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:21.367011  413874 ssh_runner.go:195] Run: cat /version.json
	I0419 20:54:21.367040  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:54:21.369805  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.370117  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.370149  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.370169  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.370355  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:21.370546  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:21.370615  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:21.370645  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:21.370717  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:21.370854  413874 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:54:21.370970  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:54:21.371127  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:54:21.371284  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:54:21.371410  413874 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:54:21.450847  413874 ssh_runner.go:195] Run: systemctl --version
	I0419 20:54:21.511307  413874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:54:21.684793  413874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:54:21.691353  413874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:54:21.691457  413874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:54:21.711251  413874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:54:21.711280  413874 start.go:494] detecting cgroup driver to use...
	I0419 20:54:21.711347  413874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:54:21.730747  413874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:54:21.746268  413874 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:54:21.746344  413874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:54:21.765269  413874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:54:21.784778  413874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:54:21.935981  413874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:54:22.098694  413874 docker.go:233] disabling docker service ...
	I0419 20:54:22.098774  413874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:54:22.118619  413874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:54:22.133653  413874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:54:22.280390  413874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:54:22.411387  413874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:54:22.429845  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:54:22.452523  413874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0419 20:54:22.452604  413874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:54:22.465854  413874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:54:22.465974  413874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:54:22.478440  413874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:54:22.490545  413874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:54:22.502744  413874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:54:22.515028  413874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:54:22.526241  413874 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:54:22.526297  413874 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:54:22.544933  413874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:54:22.559503  413874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:54:22.697477  413874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:54:22.844404  413874 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:54:22.844494  413874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:54:22.849914  413874 start.go:562] Will wait 60s for crictl version
	I0419 20:54:22.849987  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:22.855113  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:54:22.911268  413874 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:54:22.911363  413874 ssh_runner.go:195] Run: crio --version
	I0419 20:54:22.949467  413874 ssh_runner.go:195] Run: crio --version
	I0419 20:54:22.991299  413874 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0419 20:54:22.992892  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:54:22.995976  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:22.996363  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:54:11 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:54:22.996398  413874 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:54:22.996740  413874 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0419 20:54:23.003144  413874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:54:23.021749  413874 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:54:23.021923  413874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 20:54:23.021999  413874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:54:23.065199  413874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0419 20:54:23.065279  413874 ssh_runner.go:195] Run: which lz4
	I0419 20:54:23.070181  413874 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 20:54:23.074929  413874 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 20:54:23.074973  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0419 20:54:25.045657  413874 crio.go:462] duration metric: took 1.975515481s to copy over tarball
	I0419 20:54:25.045789  413874 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 20:54:27.686285  413874 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.640453245s)
	I0419 20:54:27.686321  413874 crio.go:469] duration metric: took 2.640625384s to extract the tarball
	I0419 20:54:27.686332  413874 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 20:54:27.729302  413874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:54:27.778626  413874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0419 20:54:27.778671  413874 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 20:54:27.778752  413874 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:54:27.778802  413874 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0419 20:54:27.778836  413874 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0419 20:54:27.778848  413874 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0419 20:54:27.778814  413874 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0419 20:54:27.779101  413874 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0419 20:54:27.779044  413874 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0419 20:54:27.779237  413874 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0419 20:54:27.780222  413874 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0419 20:54:27.780225  413874 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0419 20:54:27.780247  413874 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0419 20:54:27.780222  413874 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0419 20:54:27.780288  413874 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:54:27.780311  413874 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0419 20:54:27.780346  413874 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0419 20:54:27.780541  413874 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0419 20:54:28.005700  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0419 20:54:28.011314  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0419 20:54:28.020909  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0419 20:54:28.021665  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0419 20:54:28.030670  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0419 20:54:28.050990  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0419 20:54:28.099418  413874 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0419 20:54:28.099493  413874 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0419 20:54:28.099549  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.099746  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0419 20:54:28.142082  413874 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0419 20:54:28.142141  413874 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0419 20:54:28.142200  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.179202  413874 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0419 20:54:28.179263  413874 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0419 20:54:28.179308  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.204311  413874 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0419 20:54:28.204366  413874 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0419 20:54:28.204415  413874 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0419 20:54:28.204481  413874 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0419 20:54:28.204425  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.204529  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.208785  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0419 20:54:28.208906  413874 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0419 20:54:28.208939  413874 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0419 20:54:28.208966  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.233339  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0419 20:54:28.233373  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0419 20:54:28.233418  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0419 20:54:28.233419  413874 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0419 20:54:28.233443  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0419 20:54:28.233460  413874 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0419 20:54:28.233528  413874 ssh_runner.go:195] Run: which crictl
	I0419 20:54:28.303488  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0419 20:54:28.303599  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0419 20:54:28.381535  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0419 20:54:28.381601  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0419 20:54:28.381630  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0419 20:54:28.381630  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0419 20:54:28.381688  413874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0419 20:54:28.381725  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0419 20:54:28.419302  413874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0419 20:54:28.646337  413874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:54:28.831253  413874 cache_images.go:92] duration metric: took 1.052562089s to LoadCachedImages
	W0419 20:54:28.831349  413874 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0419 20:54:28.831367  413874 kubeadm.go:928] updating node { 192.168.50.60 8443 v1.20.0 crio true true} ...
	I0419 20:54:28.831512  413874 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-270819 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:54:28.831610  413874 ssh_runner.go:195] Run: crio config
	I0419 20:54:28.888184  413874 cni.go:84] Creating CNI manager for ""
	I0419 20:54:28.888210  413874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:54:28.888223  413874 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:54:28.888249  413874 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-270819 NodeName:kubernetes-upgrade-270819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0419 20:54:28.888426  413874 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-270819"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 20:54:28.888510  413874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0419 20:54:28.902241  413874 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:54:28.902310  413874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:54:28.914619  413874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0419 20:54:28.933641  413874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:54:28.954203  413874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0419 20:54:28.974448  413874 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0419 20:54:28.978667  413874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:54:28.992105  413874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:54:29.129171  413874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:54:29.146150  413874 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819 for IP: 192.168.50.60
	I0419 20:54:29.146175  413874 certs.go:194] generating shared ca certs ...
	I0419 20:54:29.146193  413874 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.146417  413874 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:54:29.146470  413874 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:54:29.146483  413874 certs.go:256] generating profile certs ...
	I0419 20:54:29.146557  413874 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.key
	I0419 20:54:29.146577  413874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.crt with IP's: []
	I0419 20:54:29.373164  413874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.crt ...
	I0419 20:54:29.373213  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.crt: {Name:mk7f3cd500f8c20fa6d802ed694fb413f8b3d835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.373446  413874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.key ...
	I0419 20:54:29.373478  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/client.key: {Name:mk2cc46009ebcdf05cb3cb7d0730251907d876dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.373571  413874 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key.0ab10811
	I0419 20:54:29.373587  413874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt.0ab10811 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.60]
	I0419 20:54:29.667151  413874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt.0ab10811 ...
	I0419 20:54:29.667207  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt.0ab10811: {Name:mk78d4af4453973622d89ff3dbc21180e6a5141c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.667418  413874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key.0ab10811 ...
	I0419 20:54:29.667440  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key.0ab10811: {Name:mk7f76e5851b4d5a3593a1fc7c95397a011bbd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.667547  413874 certs.go:381] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt.0ab10811 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt
	I0419 20:54:29.667675  413874 certs.go:385] copying /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key.0ab10811 -> /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key
	I0419 20:54:29.667760  413874 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.key
	I0419 20:54:29.667788  413874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.crt with IP's: []
	I0419 20:54:29.828577  413874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.crt ...
	I0419 20:54:29.828607  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.crt: {Name:mkbab17adf69b816fa12fd67238828e9fb9140d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.828771  413874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.key ...
	I0419 20:54:29.828785  413874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.key: {Name:mk279c7a353cce147bc547770a125e8893d8405f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:54:29.828961  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:54:29.828999  413874 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:54:29.829009  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:54:29.829030  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:54:29.829053  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:54:29.829070  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:54:29.829110  413874 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:54:29.829703  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:54:29.858528  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:54:29.888523  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:54:29.918627  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:54:29.946660  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0419 20:54:29.981487  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 20:54:30.057425  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:54:30.089988  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 20:54:30.115973  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:54:30.142346  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:54:30.167685  413874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:54:30.193052  413874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:54:30.211986  413874 ssh_runner.go:195] Run: openssl version
	I0419 20:54:30.218544  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:54:30.234148  413874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:54:30.239792  413874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:54:30.239870  413874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:54:30.246373  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:54:30.259950  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:54:30.272484  413874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:54:30.277408  413874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:54:30.277464  413874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:54:30.283999  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:54:30.296467  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:54:30.309498  413874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:54:30.314660  413874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:54:30.314738  413874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:54:30.321461  413874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:54:30.335486  413874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:54:30.340394  413874 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0419 20:54:30.340454  413874 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:54:30.340568  413874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:54:30.340617  413874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:54:30.378624  413874 cri.go:89] found id: ""
	I0419 20:54:30.378712  413874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0419 20:54:30.389875  413874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 20:54:30.401075  413874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 20:54:30.412326  413874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 20:54:30.412349  413874 kubeadm.go:156] found existing configuration files:
	
	I0419 20:54:30.412408  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 20:54:30.424077  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 20:54:30.424189  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 20:54:30.435459  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 20:54:30.446089  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 20:54:30.446161  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 20:54:30.458031  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 20:54:30.468738  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 20:54:30.468813  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 20:54:30.480482  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 20:54:30.491282  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 20:54:30.491356  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 20:54:30.502374  413874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 20:54:30.792975  413874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 20:56:30.033316  413874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0419 20:56:30.033419  413874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0419 20:56:30.035089  413874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0419 20:56:30.035168  413874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 20:56:30.035260  413874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 20:56:30.035407  413874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 20:56:30.035503  413874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 20:56:30.035557  413874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 20:56:30.037490  413874 out.go:204]   - Generating certificates and keys ...
	I0419 20:56:30.037602  413874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 20:56:30.037680  413874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 20:56:30.037745  413874 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0419 20:56:30.037793  413874 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0419 20:56:30.037894  413874 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0419 20:56:30.038011  413874 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0419 20:56:30.038099  413874 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0419 20:56:30.038278  413874 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0419 20:56:30.038381  413874 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0419 20:56:30.038582  413874 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0419 20:56:30.038678  413874 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0419 20:56:30.038768  413874 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0419 20:56:30.038853  413874 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0419 20:56:30.038939  413874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 20:56:30.039006  413874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 20:56:30.039080  413874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 20:56:30.039168  413874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 20:56:30.039249  413874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 20:56:30.039399  413874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 20:56:30.039506  413874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 20:56:30.039563  413874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 20:56:30.039631  413874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 20:56:30.041265  413874 out.go:204]   - Booting up control plane ...
	I0419 20:56:30.041371  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 20:56:30.041465  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 20:56:30.041553  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 20:56:30.041679  413874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 20:56:30.041860  413874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 20:56:30.041933  413874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0419 20:56:30.042040  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:56:30.042283  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:56:30.042377  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:56:30.042578  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:56:30.042678  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:56:30.042952  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:56:30.043064  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:56:30.043338  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:56:30.043445  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:56:30.043707  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:56:30.043718  413874 kubeadm.go:309] 
	I0419 20:56:30.043779  413874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0419 20:56:30.043829  413874 kubeadm.go:309] 		timed out waiting for the condition
	I0419 20:56:30.043846  413874 kubeadm.go:309] 
	I0419 20:56:30.043898  413874 kubeadm.go:309] 	This error is likely caused by:
	I0419 20:56:30.043934  413874 kubeadm.go:309] 		- The kubelet is not running
	I0419 20:56:30.044085  413874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0419 20:56:30.044097  413874 kubeadm.go:309] 
	I0419 20:56:30.044246  413874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0419 20:56:30.044305  413874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0419 20:56:30.044353  413874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0419 20:56:30.044363  413874 kubeadm.go:309] 
	I0419 20:56:30.044531  413874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0419 20:56:30.044651  413874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0419 20:56:30.044664  413874 kubeadm.go:309] 
	I0419 20:56:30.044800  413874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0419 20:56:30.044940  413874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0419 20:56:30.045063  413874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0419 20:56:30.045130  413874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0419 20:56:30.045150  413874 kubeadm.go:309] 
	W0419 20:56:30.045265  413874 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-270819 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0419 20:56:30.045335  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0419 20:56:30.674396  413874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:56:30.694824  413874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 20:56:30.709359  413874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 20:56:30.709391  413874 kubeadm.go:156] found existing configuration files:
	
	I0419 20:56:30.709455  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0419 20:56:30.725973  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 20:56:30.726054  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 20:56:30.741072  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0419 20:56:30.755380  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 20:56:30.755459  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 20:56:30.770579  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0419 20:56:30.784969  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 20:56:30.785060  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 20:56:30.799904  413874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0419 20:56:30.813854  413874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 20:56:30.813942  413874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 20:56:30.828566  413874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 20:56:30.940789  413874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0419 20:56:30.941277  413874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 20:56:31.167508  413874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 20:56:31.167692  413874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 20:56:31.167910  413874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 20:56:31.422529  413874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 20:56:31.424595  413874 out.go:204]   - Generating certificates and keys ...
	I0419 20:56:31.424740  413874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 20:56:31.424837  413874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 20:56:31.424966  413874 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 20:56:31.425066  413874 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0419 20:56:31.425193  413874 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0419 20:56:31.425273  413874 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0419 20:56:31.425372  413874 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0419 20:56:31.425446  413874 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0419 20:56:31.425538  413874 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 20:56:31.425695  413874 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 20:56:31.425760  413874 kubeadm.go:309] [certs] Using the existing "sa" key
	I0419 20:56:31.425839  413874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 20:56:31.565642  413874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 20:56:31.883634  413874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 20:56:32.096365  413874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 20:56:32.585617  413874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 20:56:32.608715  413874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 20:56:32.611001  413874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 20:56:32.611295  413874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 20:56:32.829394  413874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 20:56:32.831177  413874 out.go:204]   - Booting up control plane ...
	I0419 20:56:32.831412  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 20:56:32.840830  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 20:56:32.842807  413874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 20:56:32.843992  413874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 20:56:32.856497  413874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 20:57:12.858073  413874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0419 20:57:12.858192  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:57:12.858437  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:57:17.858613  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:57:17.858829  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:57:27.859303  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:57:27.859508  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:57:47.860384  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:57:47.860608  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:58:27.862171  413874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:58:27.862436  413874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:58:27.862465  413874 kubeadm.go:309] 
	I0419 20:58:27.862537  413874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0419 20:58:27.862622  413874 kubeadm.go:309] 		timed out waiting for the condition
	I0419 20:58:27.862636  413874 kubeadm.go:309] 
	I0419 20:58:27.862680  413874 kubeadm.go:309] 	This error is likely caused by:
	I0419 20:58:27.862727  413874 kubeadm.go:309] 		- The kubelet is not running
	I0419 20:58:27.862869  413874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0419 20:58:27.862881  413874 kubeadm.go:309] 
	I0419 20:58:27.863011  413874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0419 20:58:27.863059  413874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0419 20:58:27.863100  413874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0419 20:58:27.863109  413874 kubeadm.go:309] 
	I0419 20:58:27.863240  413874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0419 20:58:27.863359  413874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0419 20:58:27.863372  413874 kubeadm.go:309] 
	I0419 20:58:27.863552  413874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0419 20:58:27.863701  413874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0419 20:58:27.863839  413874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0419 20:58:27.863963  413874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0419 20:58:27.863978  413874 kubeadm.go:309] 
	I0419 20:58:27.865080  413874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 20:58:27.865179  413874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0419 20:58:27.865264  413874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0419 20:58:27.865330  413874 kubeadm.go:393] duration metric: took 3m57.524879956s to StartCluster
	I0419 20:58:27.865403  413874 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0419 20:58:27.865505  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0419 20:58:27.934491  413874 cri.go:89] found id: ""
	I0419 20:58:27.934533  413874 logs.go:276] 0 containers: []
	W0419 20:58:27.934547  413874 logs.go:278] No container was found matching "kube-apiserver"
	I0419 20:58:27.934557  413874 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0419 20:58:27.934627  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0419 20:58:27.978008  413874 cri.go:89] found id: ""
	I0419 20:58:27.978047  413874 logs.go:276] 0 containers: []
	W0419 20:58:27.978067  413874 logs.go:278] No container was found matching "etcd"
	I0419 20:58:27.978075  413874 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0419 20:58:27.978151  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0419 20:58:28.030257  413874 cri.go:89] found id: ""
	I0419 20:58:28.030289  413874 logs.go:276] 0 containers: []
	W0419 20:58:28.030301  413874 logs.go:278] No container was found matching "coredns"
	I0419 20:58:28.030309  413874 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0419 20:58:28.030385  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0419 20:58:28.073448  413874 cri.go:89] found id: ""
	I0419 20:58:28.073484  413874 logs.go:276] 0 containers: []
	W0419 20:58:28.073496  413874 logs.go:278] No container was found matching "kube-scheduler"
	I0419 20:58:28.073504  413874 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0419 20:58:28.073575  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0419 20:58:28.114940  413874 cri.go:89] found id: ""
	I0419 20:58:28.114977  413874 logs.go:276] 0 containers: []
	W0419 20:58:28.114990  413874 logs.go:278] No container was found matching "kube-proxy"
	I0419 20:58:28.114998  413874 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0419 20:58:28.115066  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0419 20:58:28.153944  413874 cri.go:89] found id: ""
	I0419 20:58:28.153983  413874 logs.go:276] 0 containers: []
	W0419 20:58:28.153995  413874 logs.go:278] No container was found matching "kube-controller-manager"
	I0419 20:58:28.154003  413874 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0419 20:58:28.154068  413874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0419 20:58:28.199808  413874 cri.go:89] found id: ""
	I0419 20:58:28.199856  413874 logs.go:276] 0 containers: []
	W0419 20:58:28.199868  413874 logs.go:278] No container was found matching "kindnet"
	I0419 20:58:28.199883  413874 logs.go:123] Gathering logs for dmesg ...
	I0419 20:58:28.199912  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 20:58:28.215234  413874 logs.go:123] Gathering logs for describe nodes ...
	I0419 20:58:28.215269  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0419 20:58:28.342217  413874 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0419 20:58:28.342240  413874 logs.go:123] Gathering logs for CRI-O ...
	I0419 20:58:28.342258  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0419 20:58:28.477271  413874 logs.go:123] Gathering logs for container status ...
	I0419 20:58:28.566585  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 20:58:28.613388  413874 logs.go:123] Gathering logs for kubelet ...
	I0419 20:58:28.613426  413874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0419 20:58:28.673025  413874 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0419 20:58:28.673089  413874 out.go:239] * 
	* 
	W0419 20:58:28.673166  413874 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0419 20:58:28.673194  413874 out.go:239] * 
	* 
	W0419 20:58:28.674104  413874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 20:58:28.677944  413874 out.go:177] 
	W0419 20:58:28.679638  413874 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0419 20:58:28.679685  413874 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0419 20:58:28.679702  413874 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0419 20:58:28.681443  413874 out.go:177] 

                                                
                                                
** /stderr **
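The kubeadm failure above points at the standard recovery path when the control plane never comes up: check the kubelet unit, then look at whichever control-plane container exited under CRI-O, and retry with the cgroup driver the suggestion names. A minimal sketch of that flow, run inside the failing VM (`minikube ssh -p kubernetes-upgrade-270819` is one way in; CONTAINERID is a placeholder for whatever `crictl ps -a` reports):

	# is the kubelet running, and why did it stop?
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# list all Kubernetes containers CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# inspect the logs of the failing container
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# follow the two hints printed above: enable the kubelet unit, then retry
	# with an explicit systemd cgroup driver (this last command runs on the host, not in the VM)
	sudo systemctl enable kubelet.service
	minikube start -p kubernetes-upgrade-270819 --extra-config=kubelet.cgroup-driver=systemd --driver=kvm2 --container-runtime=crio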
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-270819
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-270819: (2.658410491s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-270819 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-270819 status --format={{.Host}}: exit status 7 (108.096716ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
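Exit status 7 from `minikube status` is not a failure here: after the explicit stop, the status bits simply report that the host, cluster and Kubernetes are all down, which is why the harness logs it as "may be ok". A small sketch of the same check from a shell, assuming the profile name used by this test:

	minikube stop -p kubernetes-upgrade-270819
	minikube status -p kubernetes-upgrade-270819 --format='{{.Host}}'   # prints "Stopped"
	echo $?                                                             # 7 once the profile is fully stopped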
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.160485058s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-270819 version --output=json
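The upgrade check relies on the server version the cluster reports. As a hypothetical spot-check outside the test harness (jq is an assumption here, not something the test uses), the same JSON output can be narrowed to the server's git version:

	kubectl --context kubernetes-upgrade-270819 version --output=json | jq -r '.serverVersion.gitVersion'
	# expected to print v1.30.0 after the upgrade above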
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (110.050213ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-270819] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-270819
	    minikube start -p kubernetes-upgrade-270819 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2708192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-270819 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
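The K8S_DOWNGRADE_UNSUPPORTED exit is expected: minikube refuses to downgrade an existing v1.30.0 cluster in place, and the suggestion enumerates the three supported ways forward. A minimal sketch of option 1 (recreate the profile at the older version), using the same profile name, driver and runtime as this test:

	# option 1 from the suggestion above: delete and recreate at v1.20.0
	minikube delete -p kubernetes-upgrade-270819
	minikube start -p kubernetes-upgrade-270819 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio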
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-270819 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.668048782s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-19 21:00:16.512942515 +0000 UTC m=+6167.304459587
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-270819 -n kubernetes-upgrade-270819
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-270819 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-270819 logs -n 25: (1.895825091s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo find             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo crio             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-752991                       | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-683947              | running-upgrade-683947    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-725675 ssh cat      | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p old-k8s-version-771336              | old-k8s-version-771336    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --kvm-network=default                  |                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |                |                     |                     |
	|         | --disable-driver-mounts                |                           |         |                |                     |                     |
	|         | --keep-context=false                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:59 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-465658 ssh                | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-465658 -- sudo         | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p no-preload-202684                   | no-preload-202684         | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |                |                     |                     |
	|         |  --container-runtime=crio              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	| start   | -p pause-635451                        | pause-635451              | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 21:00 UTC |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC | 19 Apr 24 21:00 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| delete  | -p pause-635451                        | pause-635451              | jenkins | v1.33.0-beta.0 | 19 Apr 24 21:00 UTC | 19 Apr 24 21:00 UTC |
	| start   | -p embed-certs-689470                  | embed-certs-689470        | jenkins | v1.33.0-beta.0 | 19 Apr 24 21:00 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2            |                           |         |                |                     |                     |
	|         |  --container-runtime=crio              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 21:00:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 21:00:17.143944  421767 out.go:291] Setting OutFile to fd 1 ...
	I0419 21:00:17.144139  421767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 21:00:17.144167  421767 out.go:304] Setting ErrFile to fd 2...
	I0419 21:00:17.144183  421767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 21:00:17.144442  421767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 21:00:17.145145  421767 out.go:298] Setting JSON to false
	I0419 21:00:17.146263  421767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9763,"bootTime":1713550654,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 21:00:17.146342  421767 start.go:139] virtualization: kvm guest
	I0419 21:00:17.148539  421767 out.go:177] * [embed-certs-689470] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 21:00:17.150285  421767 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 21:00:17.150353  421767 notify.go:220] Checking for updates...
	I0419 21:00:17.151732  421767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 21:00:17.153054  421767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 21:00:17.154443  421767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 21:00:17.155791  421767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 21:00:17.157057  421767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.641855922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560417641829965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=823a2113-9dff-48fa-bbbd-31383f256e34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.642509700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dd0f495-cc8c-4cc1-930a-55a9c3f01c7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.642595783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dd0f495-cc8c-4cc1-930a-55a9c3f01c7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.642899545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25ac9774ffb39d605744596d7de01776ace76097de1a9a8558f355b7663d0b9b,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414162578384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a17d58f6482f6aa8153d7ced4c36a4b1ea8c29cfbc51381314dcd70b5cc6eb,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414123271768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1df6d8e8ab0351b50581e0eb22eb3a623eb5db24f9cad9c426b9bec8f533175,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713560414170397197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3909cee3a5a935191d5162092af32c5e74aa25341d4fba619e8512bf750cc0,PodSandboxId:c92dff4c699c6c0a842da0ad8860d47d4a258e44b4ac32463ad7f08420d1ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1713560409512493398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d7f01f8d8ac3c2f327dfef9e8e9102530f908da28d3fe9b6b3dccb40d3127f,PodSandboxId:5fcf18ee3ea2d95e49d3795b2a52aef501eab8e7346e2f5465bb338aefe2c937,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_R
UNNING,CreatedAt:1713560409523087284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a4a37914c5d47062494203b15ff32c3c282eb0f3b725458042b22550a01ba3,PodSandboxId:73414e3b9d54150a51acdf6ce3033c817400497522c159e1dbd26193b8671f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1713560409458165684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dce36329b6e8678b7c6c6b2b1ff6b7c75d0757f540cb2501a0cf05409679d98,PodSandboxId:80223dc071d23e09552934952dd143fe4f5fadda5c79d7f38dfeea48d1595509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:17135
60409475069023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31826f2113183754a3b95aa81d731246255084235022065cc8b9b63155e77ce,PodSandboxId:c511ab14fa43a9a5ba0203628aea69dc995f0e5955e262c6a749cb637bc4fa7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17135604056953
09581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713560404687964393,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391591081171,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391617266378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7f80a1d81c2e356d48f372a0d0e95b9aa3091df4987a21c57b3110a8d973d,PodSandboxId:44c1b4ee92a3bcdf00c098986a07278bf7ad131643aeaa731e1b380151d
3a7ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560387863686751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ed7df17d452e5884d7e9d75ac1b82557f3b217af131431645c8f5689d30d21,PodSandboxId:abac0cd0430ab8fccd21634363f7a1bca4e74a91dec1e9005d2de8a3b3e941b0,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560387781082175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e515b203e5509437cd6e2e326ffea73568d420ed2831389c7105ac9de2b70f3,PodSandboxId:99e02a1ba986236deffde7a8489f406b6c5d858acfb08e9ffa7a7ca25738cfbd,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560387850584992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceec4fd44c789f414ba6dac34a4e79357fa3480ccbf79bafe677322f3a2543a6,PodSandboxId:311bcda1a029c5f6df8697f5bdf7074b61a7d1e153d5c7f4c974d2a5b569bf3b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560387721313101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8060489f13e39ace82ef3773397ba585784b6fba119cc13129c1b08c6f1adb,PodSandboxId:3d56cad95c34ab84a8d3531cb5409917167d77018a2c942786e709cfcacf72b7,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560387603090932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dd0f495-cc8c-4cc1-930a-55a9c3f01c7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.700987473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e16bcc4e-9e61-4a2c-919b-0622e302074f name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.701402717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e16bcc4e-9e61-4a2c-919b-0622e302074f name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.702680060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a257e7f-1f51-4c9e-add8-292e36a748e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.703312598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560417703282031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a257e7f-1f51-4c9e-add8-292e36a748e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.704135700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e01905b-e19b-412b-8912-b232e1bd82fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.704298656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e01905b-e19b-412b-8912-b232e1bd82fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.704727362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25ac9774ffb39d605744596d7de01776ace76097de1a9a8558f355b7663d0b9b,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414162578384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a17d58f6482f6aa8153d7ced4c36a4b1ea8c29cfbc51381314dcd70b5cc6eb,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414123271768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1df6d8e8ab0351b50581e0eb22eb3a623eb5db24f9cad9c426b9bec8f533175,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713560414170397197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3909cee3a5a935191d5162092af32c5e74aa25341d4fba619e8512bf750cc0,PodSandboxId:c92dff4c699c6c0a842da0ad8860d47d4a258e44b4ac32463ad7f08420d1ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1713560409512493398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d7f01f8d8ac3c2f327dfef9e8e9102530f908da28d3fe9b6b3dccb40d3127f,PodSandboxId:5fcf18ee3ea2d95e49d3795b2a52aef501eab8e7346e2f5465bb338aefe2c937,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_R
UNNING,CreatedAt:1713560409523087284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a4a37914c5d47062494203b15ff32c3c282eb0f3b725458042b22550a01ba3,PodSandboxId:73414e3b9d54150a51acdf6ce3033c817400497522c159e1dbd26193b8671f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1713560409458165684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dce36329b6e8678b7c6c6b2b1ff6b7c75d0757f540cb2501a0cf05409679d98,PodSandboxId:80223dc071d23e09552934952dd143fe4f5fadda5c79d7f38dfeea48d1595509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:17135
60409475069023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31826f2113183754a3b95aa81d731246255084235022065cc8b9b63155e77ce,PodSandboxId:c511ab14fa43a9a5ba0203628aea69dc995f0e5955e262c6a749cb637bc4fa7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17135604056953
09581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713560404687964393,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391591081171,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391617266378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7f80a1d81c2e356d48f372a0d0e95b9aa3091df4987a21c57b3110a8d973d,PodSandboxId:44c1b4ee92a3bcdf00c098986a07278bf7ad131643aeaa731e1b380151d
3a7ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560387863686751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ed7df17d452e5884d7e9d75ac1b82557f3b217af131431645c8f5689d30d21,PodSandboxId:abac0cd0430ab8fccd21634363f7a1bca4e74a91dec1e9005d2de8a3b3e941b0,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560387781082175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e515b203e5509437cd6e2e326ffea73568d420ed2831389c7105ac9de2b70f3,PodSandboxId:99e02a1ba986236deffde7a8489f406b6c5d858acfb08e9ffa7a7ca25738cfbd,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560387850584992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceec4fd44c789f414ba6dac34a4e79357fa3480ccbf79bafe677322f3a2543a6,PodSandboxId:311bcda1a029c5f6df8697f5bdf7074b61a7d1e153d5c7f4c974d2a5b569bf3b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560387721313101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8060489f13e39ace82ef3773397ba585784b6fba119cc13129c1b08c6f1adb,PodSandboxId:3d56cad95c34ab84a8d3531cb5409917167d77018a2c942786e709cfcacf72b7,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560387603090932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e01905b-e19b-412b-8912-b232e1bd82fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.761983007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5901833e-001d-4353-a852-0dd9a3da60df name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.762082540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5901833e-001d-4353-a852-0dd9a3da60df name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.763593120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65a4efc4-ecf4-4d20-83f4-f0e41f7adc44 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.764601193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560417764377295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65a4efc4-ecf4-4d20-83f4-f0e41f7adc44 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.765538904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69571ed9-7d03-4f46-9e28-2e58d6640cb1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.765611482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69571ed9-7d03-4f46-9e28-2e58d6640cb1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.766047686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25ac9774ffb39d605744596d7de01776ace76097de1a9a8558f355b7663d0b9b,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414162578384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a17d58f6482f6aa8153d7ced4c36a4b1ea8c29cfbc51381314dcd70b5cc6eb,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414123271768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1df6d8e8ab0351b50581e0eb22eb3a623eb5db24f9cad9c426b9bec8f533175,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713560414170397197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3909cee3a5a935191d5162092af32c5e74aa25341d4fba619e8512bf750cc0,PodSandboxId:c92dff4c699c6c0a842da0ad8860d47d4a258e44b4ac32463ad7f08420d1ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1713560409512493398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d7f01f8d8ac3c2f327dfef9e8e9102530f908da28d3fe9b6b3dccb40d3127f,PodSandboxId:5fcf18ee3ea2d95e49d3795b2a52aef501eab8e7346e2f5465bb338aefe2c937,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_R
UNNING,CreatedAt:1713560409523087284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a4a37914c5d47062494203b15ff32c3c282eb0f3b725458042b22550a01ba3,PodSandboxId:73414e3b9d54150a51acdf6ce3033c817400497522c159e1dbd26193b8671f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1713560409458165684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dce36329b6e8678b7c6c6b2b1ff6b7c75d0757f540cb2501a0cf05409679d98,PodSandboxId:80223dc071d23e09552934952dd143fe4f5fadda5c79d7f38dfeea48d1595509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:17135
60409475069023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31826f2113183754a3b95aa81d731246255084235022065cc8b9b63155e77ce,PodSandboxId:c511ab14fa43a9a5ba0203628aea69dc995f0e5955e262c6a749cb637bc4fa7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17135604056953
09581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713560404687964393,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391591081171,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391617266378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7f80a1d81c2e356d48f372a0d0e95b9aa3091df4987a21c57b3110a8d973d,PodSandboxId:44c1b4ee92a3bcdf00c098986a07278bf7ad131643aeaa731e1b380151d
3a7ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560387863686751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ed7df17d452e5884d7e9d75ac1b82557f3b217af131431645c8f5689d30d21,PodSandboxId:abac0cd0430ab8fccd21634363f7a1bca4e74a91dec1e9005d2de8a3b3e941b0,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560387781082175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e515b203e5509437cd6e2e326ffea73568d420ed2831389c7105ac9de2b70f3,PodSandboxId:99e02a1ba986236deffde7a8489f406b6c5d858acfb08e9ffa7a7ca25738cfbd,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560387850584992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceec4fd44c789f414ba6dac34a4e79357fa3480ccbf79bafe677322f3a2543a6,PodSandboxId:311bcda1a029c5f6df8697f5bdf7074b61a7d1e153d5c7f4c974d2a5b569bf3b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560387721313101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8060489f13e39ace82ef3773397ba585784b6fba119cc13129c1b08c6f1adb,PodSandboxId:3d56cad95c34ab84a8d3531cb5409917167d77018a2c942786e709cfcacf72b7,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560387603090932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69571ed9-7d03-4f46-9e28-2e58d6640cb1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.810553608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3591399e-c5f7-43b4-8a0e-e16b9907723d name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.810652643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3591399e-c5f7-43b4-8a0e-e16b9907723d name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.812706403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fda0cf5-8b3e-4404-a0ec-2c455838340d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.813549248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560417813515252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fda0cf5-8b3e-4404-a0ec-2c455838340d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.814622113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d5099c0-2e79-4c86-add5-1ccf471e4c80 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.814702049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d5099c0-2e79-4c86-add5-1ccf471e4c80 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:17 kubernetes-upgrade-270819 crio[3013]: time="2024-04-19 21:00:17.815020486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25ac9774ffb39d605744596d7de01776ace76097de1a9a8558f355b7663d0b9b,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414162578384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a17d58f6482f6aa8153d7ced4c36a4b1ea8c29cfbc51381314dcd70b5cc6eb,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560414123271768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1df6d8e8ab0351b50581e0eb22eb3a623eb5db24f9cad9c426b9bec8f533175,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713560414170397197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3909cee3a5a935191d5162092af32c5e74aa25341d4fba619e8512bf750cc0,PodSandboxId:c92dff4c699c6c0a842da0ad8860d47d4a258e44b4ac32463ad7f08420d1ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1713560409512493398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d7f01f8d8ac3c2f327dfef9e8e9102530f908da28d3fe9b6b3dccb40d3127f,PodSandboxId:5fcf18ee3ea2d95e49d3795b2a52aef501eab8e7346e2f5465bb338aefe2c937,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_R
UNNING,CreatedAt:1713560409523087284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a4a37914c5d47062494203b15ff32c3c282eb0f3b725458042b22550a01ba3,PodSandboxId:73414e3b9d54150a51acdf6ce3033c817400497522c159e1dbd26193b8671f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1713560409458165684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dce36329b6e8678b7c6c6b2b1ff6b7c75d0757f540cb2501a0cf05409679d98,PodSandboxId:80223dc071d23e09552934952dd143fe4f5fadda5c79d7f38dfeea48d1595509,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:17135
60409475069023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31826f2113183754a3b95aa81d731246255084235022065cc8b9b63155e77ce,PodSandboxId:c511ab14fa43a9a5ba0203628aea69dc995f0e5955e262c6a749cb637bc4fa7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17135604056953
09581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f,PodSandboxId:7c9622e7e5109ad75b4bed06a8d4c04221e18ed4f518f113a6ab9f0935725232,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713560404687964393,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dde02197-6f87-4163-93dd-3e8432fd3f1a,},Annotations:map[string]string{io.kubernetes.container.hash: c2a3e6bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295,PodSandboxId:8432e6de6f5a0ce20788ed5f2c969eb408c5968e51441a9f71bad9d4c5831bf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391591081171,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4zmh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb95cbb-84bb-41f8-adb7-e723b54961dc,},Annotations:map[string]string{io.kubernetes.container.hash: 75840475,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef,PodSandboxId:f914379770df14b6300982964d17c65e82618940b86e5e094b03800fd1ea493c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560391617266378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jrvq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7d3779-1634-4a20-bd8a-41d11d8bf617,},Annotations:map[string]string{io.kubernetes.container.hash: 3b0a8bb5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7f80a1d81c2e356d48f372a0d0e95b9aa3091df4987a21c57b3110a8d973d,PodSandboxId:44c1b4ee92a3bcdf00c098986a07278bf7ad131643aeaa731e1b380151d
3a7ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560387863686751,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bb673cf3d3bb1bdd020a696ba23593c,},Annotations:map[string]string{io.kubernetes.container.hash: 388be5ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ed7df17d452e5884d7e9d75ac1b82557f3b217af131431645c8f5689d30d21,PodSandboxId:abac0cd0430ab8fccd21634363f7a1bca4e74a91dec1e9005d2de8a3b3e941b0,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560387781082175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f33fd730a6f53239c067bb31cd8668f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e515b203e5509437cd6e2e326ffea73568d420ed2831389c7105ac9de2b70f3,PodSandboxId:99e02a1ba986236deffde7a8489f406b6c5d858acfb08e9ffa7a7ca25738cfbd,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560387850584992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd56aec993cd64666359d804036a58a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceec4fd44c789f414ba6dac34a4e79357fa3480ccbf79bafe677322f3a2543a6,PodSandboxId:311bcda1a029c5f6df8697f5bdf7074b61a7d1e153d5c7f4c974d2a5b569bf3b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560387721313101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-270819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da291fab7abaa0fe5d0fff8d32deb701,},Annotations:map[string]string{io.kubernetes.container.hash: 7c8ef0cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8060489f13e39ace82ef3773397ba585784b6fba119cc13129c1b08c6f1adb,PodSandboxId:3d56cad95c34ab84a8d3531cb5409917167d77018a2c942786e709cfcacf72b7,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560387603090932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18efe638-781b-4c68-9bb3-05b4d99f3e01,},Annotations:map[string]string{io.kubernetes.container.hash: 6fee467d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d5099c0-2e79-4c86-add5-1ccf471e4c80 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1df6d8e8ab03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   7c9622e7e5109       storage-provisioner
	25ac9774ffb39       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   f914379770df1       coredns-7db6d8ff4d-jrvq5
	b0a17d58f6482       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   8432e6de6f5a0       coredns-7db6d8ff4d-4zmh4
	16d7f01f8d8ac       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   8 seconds ago       Running             kube-controller-manager   2                   5fcf18ee3ea2d       kube-controller-manager-kubernetes-upgrade-270819
	7e3909cee3a5a       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   8 seconds ago       Running             kube-scheduler            2                   c92dff4c699c6       kube-scheduler-kubernetes-upgrade-270819
	1dce36329b6e8       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   8 seconds ago       Running             kube-apiserver            2                   80223dc071d23       kube-apiserver-kubernetes-upgrade-270819
	31a4a37914c5d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   73414e3b9d541       etcd-kubernetes-upgrade-270819
	a31826f211318       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   12 seconds ago      Running             kube-proxy                2                   c511ab14fa43a       kube-proxy-2z5l6
	ae5388c0ce7ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       2                   7c9622e7e5109       storage-provisioner
	80403c4700534       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   f914379770df1       coredns-7db6d8ff4d-jrvq5
	8628b54c03547       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   8432e6de6f5a0       coredns-7db6d8ff4d-4zmh4
	8ee7f80a1d81c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      1                   44c1b4ee92a3b       etcd-kubernetes-upgrade-270819
	6e515b203e550       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   30 seconds ago      Exited              kube-scheduler            1                   99e02a1ba9862       kube-scheduler-kubernetes-upgrade-270819
	63ed7df17d452       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   30 seconds ago      Exited              kube-controller-manager   1                   abac0cd0430ab       kube-controller-manager-kubernetes-upgrade-270819
	ceec4fd44c789       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   30 seconds ago      Exited              kube-apiserver            1                   311bcda1a029c       kube-apiserver-kubernetes-upgrade-270819
	8c8060489f13e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   30 seconds ago      Exited              kube-proxy                1                   3d56cad95c34a       kube-proxy-2z5l6
	
	
	==> coredns [25ac9774ffb39d605744596d7de01776ace76097de1a9a8558f355b7663d0b9b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b0a17d58f6482f6aa8153d7ced4c36a4b1ea8c29cfbc51381314dcd70b5cc6eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-270819
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-270819
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-270819
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 21:00:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 21:00:13 +0000   Fri, 19 Apr 2024 20:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 21:00:13 +0000   Fri, 19 Apr 2024 20:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 21:00:13 +0000   Fri, 19 Apr 2024 20:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 21:00:13 +0000   Fri, 19 Apr 2024 20:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.60
	  Hostname:    kubernetes-upgrade-270819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcf30ac3e59e4161814bdf96c9538f4a
	  System UUID:                fcf30ac3-e59e-4161-814b-df96c9538f4a
	  Boot ID:                    ccf1fbf2-3160-4794-8c80-fc6dda14e50e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4zmh4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     45s
	  kube-system                 coredns-7db6d8ff4d-jrvq5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     45s
	  kube-system                 etcd-kubernetes-upgrade-270819                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-270819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-270819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-2z5l6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-kubernetes-upgrade-270819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           46s                node-controller  Node kubernetes-upgrade-270819 event: Registered Node kubernetes-upgrade-270819 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 10s)   kubelet          Node kubernetes-upgrade-270819 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr19 20:59] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.063878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058562] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.176179] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.161386] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.328633] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +4.794230] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.063664] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.083770] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
	[  +9.478704] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.086291] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.088663] kauditd_printk_skb: 18 callbacks suppressed
	[ +18.838958] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.094862] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.162505] systemd-fstab-generator[2265]: Ignoring "noauto" option for root device
	[  +0.569022] systemd-fstab-generator[2511]: Ignoring "noauto" option for root device
	[  +0.467692] systemd-fstab-generator[2711]: Ignoring "noauto" option for root device
	[  +0.637467] systemd-fstab-generator[2902]: Ignoring "noauto" option for root device
	[  +2.730505] systemd-fstab-generator[3724]: Ignoring "noauto" option for root device
	[Apr19 21:00] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.929707] systemd-fstab-generator[4091]: Ignoring "noauto" option for root device
	[  +0.106835] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.252101] kauditd_printk_skb: 44 callbacks suppressed
	[  +1.251184] systemd-fstab-generator[4608]: Ignoring "noauto" option for root device
	
	
	==> etcd [31a4a37914c5d47062494203b15ff32c3c282eb0f3b725458042b22550a01ba3] <==
	{"level":"info","ts":"2024-04-19T21:00:09.901958Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.60:2380"}
	{"level":"info","ts":"2024-04-19T21:00:09.899336Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-19T21:00:09.900664Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T21:00:09.902111Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T21:00:09.90229Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T21:00:09.90162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d switched to configuration voters=(2901212365131411005)"}
	{"level":"info","ts":"2024-04-19T21:00:09.902656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d29730a4e6a94b90","local-member-id":"28432d4659a0ee3d","added-peer-id":"28432d4659a0ee3d","added-peer-peer-urls":["https://192.168.50.60:2380"]}
	{"level":"info","ts":"2024-04-19T21:00:09.903014Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d29730a4e6a94b90","local-member-id":"28432d4659a0ee3d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T21:00:09.903132Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T21:00:09.90641Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28432d4659a0ee3d","initial-advertise-peer-urls":["https://192.168.50.60:2380"],"listen-peer-urls":["https://192.168.50.60:2380"],"advertise-client-urls":["https://192.168.50.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T21:00:09.906528Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T21:00:11.248805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-19T21:00:11.248935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-19T21:00:11.248987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d received MsgPreVoteResp from 28432d4659a0ee3d at term 2"}
	{"level":"info","ts":"2024-04-19T21:00:11.249024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d became candidate at term 3"}
	{"level":"info","ts":"2024-04-19T21:00:11.249048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d received MsgVoteResp from 28432d4659a0ee3d at term 3"}
	{"level":"info","ts":"2024-04-19T21:00:11.249075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d became leader at term 3"}
	{"level":"info","ts":"2024-04-19T21:00:11.249107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28432d4659a0ee3d elected leader 28432d4659a0ee3d at term 3"}
	{"level":"info","ts":"2024-04-19T21:00:11.257254Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"28432d4659a0ee3d","local-member-attributes":"{Name:kubernetes-upgrade-270819 ClientURLs:[https://192.168.50.60:2379]}","request-path":"/0/members/28432d4659a0ee3d/attributes","cluster-id":"d29730a4e6a94b90","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T21:00:11.257418Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T21:00:11.257507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T21:00:11.260873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.60:2379"}
	{"level":"info","ts":"2024-04-19T21:00:11.263415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-19T21:00:11.263663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T21:00:11.263709Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [8ee7f80a1d81c2e356d48f372a0d0e95b9aa3091df4987a21c57b3110a8d973d] <==
	{"level":"info","ts":"2024-04-19T20:59:48.646559Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"82.513209ms"}
	{"level":"info","ts":"2024-04-19T20:59:48.723613Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-19T20:59:48.758545Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d29730a4e6a94b90","local-member-id":"28432d4659a0ee3d","commit-index":398}
	{"level":"info","ts":"2024-04-19T20:59:48.758889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-19T20:59:48.760298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d became follower at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:48.770622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 28432d4659a0ee3d [peers: [], term: 2, commit: 398, applied: 0, lastindex: 398, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-19T20:59:48.784563Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-19T20:59:48.858725Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":387}
	{"level":"info","ts":"2024-04-19T20:59:48.905303Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-19T20:59:48.913113Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"28432d4659a0ee3d","timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:59:48.917429Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"28432d4659a0ee3d"}
	{"level":"info","ts":"2024-04-19T20:59:48.918281Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"28432d4659a0ee3d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-19T20:59:48.924413Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-19T20:59:48.926358Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:48.92727Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:48.92731Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:48.927634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28432d4659a0ee3d switched to configuration voters=(2901212365131411005)"}
	{"level":"info","ts":"2024-04-19T20:59:48.93031Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d29730a4e6a94b90","local-member-id":"28432d4659a0ee3d","added-peer-id":"28432d4659a0ee3d","added-peer-peer-urls":["https://192.168.50.60:2380"]}
	{"level":"info","ts":"2024-04-19T20:59:48.930555Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d29730a4e6a94b90","local-member-id":"28432d4659a0ee3d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:48.932288Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:48.938948Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T20:59:48.947633Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28432d4659a0ee3d","initial-advertise-peer-urls":["https://192.168.50.60:2380"],"listen-peer-urls":["https://192.168.50.60:2380"],"advertise-client-urls":["https://192.168.50.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:59:48.952288Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:59:48.952472Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.60:2380"}
	{"level":"info","ts":"2024-04-19T20:59:48.953253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.60:2380"}
	
	
	==> kernel <==
	 21:00:18 up 1 min,  0 users,  load average: 2.32, 0.69, 0.24
	Linux kubernetes-upgrade-270819 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1dce36329b6e8678b7c6c6b2b1ff6b7c75d0757f540cb2501a0cf05409679d98] <==
	I0419 21:00:12.836930       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0419 21:00:12.934454       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 21:00:12.948771       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 21:00:12.948891       1 policy_source.go:224] refreshing policies
	I0419 21:00:12.979439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 21:00:12.980466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 21:00:12.981758       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 21:00:12.983243       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 21:00:12.983444       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0419 21:00:12.983551       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0419 21:00:12.985690       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0419 21:00:12.986935       1 aggregator.go:165] initial CRD sync complete...
	I0419 21:00:12.986979       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 21:00:12.987003       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 21:00:12.987026       1 cache.go:39] Caches are synced for autoregister controller
	I0419 21:00:12.988749       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0419 21:00:12.995966       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0419 21:00:13.018150       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 21:00:13.783387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 21:00:14.399959       1 controller.go:615] quota admission added evaluator for: endpoints
	I0419 21:00:14.791149       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 21:00:14.812100       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 21:00:14.872575       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 21:00:14.966521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 21:00:14.979565       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ceec4fd44c789f414ba6dac34a4e79357fa3480ccbf79bafe677322f3a2543a6] <==
	I0419 20:59:48.402292       1 options.go:221] external host was not specified, using 192.168.50.60
	I0419 20:59:48.403421       1 server.go:148] Version: v1.30.0
	I0419 20:59:48.403449       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0419 20:59:49.380551       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0419 20:59:49.380890       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0419 20:59:49.381745       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0419 20:59:49.383983       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:59:49.385659       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0419 20:59:49.385795       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0419 20:59:49.386024       1 instance.go:299] Using reconciler: lease
	W0419 20:59:49.387263       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [16d7f01f8d8ac3c2f327dfef9e8e9102530f908da28d3fe9b6b3dccb40d3127f] <==
	I0419 21:00:14.952618       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0419 21:00:14.952699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0419 21:00:14.952745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0419 21:00:14.952815       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0419 21:00:14.952874       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0419 21:00:14.952973       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0419 21:00:14.953144       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0419 21:00:14.953252       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0419 21:00:14.953315       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0419 21:00:14.953363       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0419 21:00:14.963729       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0419 21:00:14.963993       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0419 21:00:14.964034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0419 21:00:14.976327       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0419 21:00:14.976563       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0419 21:00:14.976624       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0419 21:00:14.976636       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0419 21:00:14.988428       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0419 21:00:14.988683       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0419 21:00:14.988769       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0419 21:00:14.992481       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0419 21:00:14.992722       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0419 21:00:14.992769       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0419 21:00:14.992781       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0419 21:00:15.002624       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-controller-manager [63ed7df17d452e5884d7e9d75ac1b82557f3b217af131431645c8f5689d30d21] <==
	
	
	==> kube-proxy [8c8060489f13e39ace82ef3773397ba585784b6fba119cc13129c1b08c6f1adb] <==
	
	
	==> kube-proxy [a31826f2113183754a3b95aa81d731246255084235022065cc8b9b63155e77ce] <==
	I0419 21:00:05.819752       1 server_linux.go:69] "Using iptables proxy"
	E0419 21:00:05.822921       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-270819\": dial tcp 192.168.50.60:8443: connect: connection refused"
	E0419 21:00:06.959738       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-270819\": dial tcp 192.168.50.60:8443: connect: connection refused"
	E0419 21:00:09.318944       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-270819\": dial tcp 192.168.50.60:8443: connect: connection refused"
	I0419 21:00:13.679326       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.60"]
	I0419 21:00:13.721774       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 21:00:13.721968       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 21:00:13.722009       1 server_linux.go:165] "Using iptables Proxier"
	I0419 21:00:13.724748       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 21:00:13.725054       1 server.go:872] "Version info" version="v1.30.0"
	I0419 21:00:13.725383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 21:00:13.727125       1 config.go:192] "Starting service config controller"
	I0419 21:00:13.727268       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 21:00:13.727334       1 config.go:101] "Starting endpoint slice config controller"
	I0419 21:00:13.727358       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 21:00:13.727868       1 config.go:319] "Starting node config controller"
	I0419 21:00:13.729095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 21:00:13.828167       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 21:00:13.828278       1 shared_informer.go:320] Caches are synced for service config
	I0419 21:00:13.829922       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6e515b203e5509437cd6e2e326ffea73568d420ed2831389c7105ac9de2b70f3] <==
	
	
	==> kube-scheduler [7e3909cee3a5a935191d5162092af32c5e74aa25341d4fba619e8512bf750cc0] <==
	I0419 21:00:10.948968       1 serving.go:380] Generated self-signed cert in-memory
	W0419 21:00:12.905824       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0419 21:00:12.905936       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0419 21:00:12.906007       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0419 21:00:12.906036       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0419 21:00:12.956972       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 21:00:12.957099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 21:00:12.965232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 21:00:12.965273       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 21:00:12.965883       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 21:00:12.965959       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 21:00:13.065936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 19 21:00:09 kubernetes-upgrade-270819 kubelet[4098]: W0419 21:00:09.646994    4098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.60:8443: connect: connection refused
	Apr 19 21:00:09 kubernetes-upgrade-270819 kubelet[4098]: E0419 21:00:09.647055    4098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.60:8443: connect: connection refused
	Apr 19 21:00:09 kubernetes-upgrade-270819 kubelet[4098]: W0419 21:00:09.770738    4098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.60:8443: connect: connection refused
	Apr 19 21:00:09 kubernetes-upgrade-270819 kubelet[4098]: E0419 21:00:09.770794    4098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.60:8443: connect: connection refused
	Apr 19 21:00:10 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:10.308833    4098 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.050804    4098 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.051769    4098 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.054455    4098 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.055926    4098 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: E0419 21:00:13.215104    4098 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-270819\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: E0419 21:00:13.666424    4098 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-270819\" already exists" pod="kube-system/etcd-kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: E0419 21:00:13.754704    4098 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-270819\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-270819"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.785131    4098 apiserver.go:52] "Watching apiserver"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.799930    4098 topology_manager.go:215] "Topology Admit Handler" podUID="dde02197-6f87-4163-93dd-3e8432fd3f1a" podNamespace="kube-system" podName="storage-provisioner"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.800289    4098 topology_manager.go:215] "Topology Admit Handler" podUID="4cb95cbb-84bb-41f8-adb7-e723b54961dc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4zmh4"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.800408    4098 topology_manager.go:215] "Topology Admit Handler" podUID="9c7d3779-1634-4a20-bd8a-41d11d8bf617" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jrvq5"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.800579    4098 topology_manager.go:215] "Topology Admit Handler" podUID="18efe638-781b-4c68-9bb3-05b4d99f3e01" podNamespace="kube-system" podName="kube-proxy-2z5l6"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.891409    4098 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.910270    4098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dde02197-6f87-4163-93dd-3e8432fd3f1a-tmp\") pod \"storage-provisioner\" (UID: \"dde02197-6f87-4163-93dd-3e8432fd3f1a\") " pod="kube-system/storage-provisioner"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.910944    4098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18efe638-781b-4c68-9bb3-05b4d99f3e01-xtables-lock\") pod \"kube-proxy-2z5l6\" (UID: \"18efe638-781b-4c68-9bb3-05b4d99f3e01\") " pod="kube-system/kube-proxy-2z5l6"
	Apr 19 21:00:13 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:13.911425    4098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18efe638-781b-4c68-9bb3-05b4d99f3e01-lib-modules\") pod \"kube-proxy-2z5l6\" (UID: \"18efe638-781b-4c68-9bb3-05b4d99f3e01\") " pod="kube-system/kube-proxy-2z5l6"
	Apr 19 21:00:14 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:14.101227    4098 scope.go:117] "RemoveContainer" containerID="8628b54c035472bdc1e677b621abf6b26bb46f75329bb4b4a2950719dc14b295"
	Apr 19 21:00:14 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:14.101947    4098 scope.go:117] "RemoveContainer" containerID="80403c4700534ed5d36467cf28c2d44e45fc9ec6635d5d27ed4a429fc8f25aef"
	Apr 19 21:00:14 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:14.102374    4098 scope.go:117] "RemoveContainer" containerID="ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f"
	Apr 19 21:00:17 kubernetes-upgrade-270819 kubelet[4098]: I0419 21:00:17.294058    4098 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [ae5388c0ce7ae548385578f730689a787334199085398e853939e33909587e3f] <==
	I0419 21:00:04.769294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0419 21:00:04.771036       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c1df6d8e8ab0351b50581e0eb22eb3a623eb5db24f9cad9c426b9bec8f533175] <==
	I0419 21:00:14.363244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0419 21:00:14.382130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0419 21:00:14.382341       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0419 21:00:14.430579       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0419 21:00:14.434929       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-270819_bf6bffc4-1a68-430d-a031-c0bf1ba37d53!
	I0419 21:00:14.433390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08ee0f1e-5727-4cfb-9284-4d6f188245c3", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-270819_bf6bffc4-1a68-430d-a031-c0bf1ba37d53 became leader
	I0419 21:00:14.539878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-270819_bf6bffc4-1a68-430d-a031-c0bf1ba37d53!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-270819 -n kubernetes-upgrade-270819
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-270819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-270819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-270819
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-270819: (1.191840051s)
--- FAIL: TestKubernetesUpgrade (417.25s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (93.54s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-635451 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-635451 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.740574438s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-635451] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-635451" primary control-plane node in "pause-635451" cluster
	* Updating the running kvm2 "pause-635451" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-635451" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:58:42.501780  420629 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:58:42.502219  420629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:58:42.502309  420629 out.go:304] Setting ErrFile to fd 2...
	I0419 20:58:42.502337  420629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:58:42.503150  420629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:58:42.504107  420629 out.go:298] Setting JSON to false
	I0419 20:58:42.505942  420629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9669,"bootTime":1713550654,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:58:42.506013  420629 start.go:139] virtualization: kvm guest
	I0419 20:58:42.508189  420629 out.go:177] * [pause-635451] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:58:42.510194  420629 notify.go:220] Checking for updates...
	I0419 20:58:42.510402  420629 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:58:42.512066  420629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:58:42.513538  420629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:58:42.514989  420629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:58:42.516475  420629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:58:42.517903  420629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:58:42.519789  420629 config.go:182] Loaded profile config "pause-635451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:58:42.520289  420629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:58:42.520330  420629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:58:42.539114  420629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0419 20:58:42.539672  420629 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:58:42.540261  420629 main.go:141] libmachine: Using API Version  1
	I0419 20:58:42.540289  420629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:58:42.540710  420629 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:58:42.540952  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:58:42.541219  420629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:58:42.541550  420629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:58:42.541624  420629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:58:42.557612  420629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I0419 20:58:42.558128  420629 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:58:42.558691  420629 main.go:141] libmachine: Using API Version  1
	I0419 20:58:42.558722  420629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:58:42.559069  420629 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:58:42.559273  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:58:42.595049  420629 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:58:42.596517  420629 start.go:297] selected driver: kvm2
	I0419 20:58:42.596540  420629 start.go:901] validating driver "kvm2" against &{Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:58:42.596759  420629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:58:42.597248  420629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:58:42.597377  420629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:58:42.625414  420629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:58:42.626494  420629 cni.go:84] Creating CNI manager for ""
	I0419 20:58:42.626522  420629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:58:42.626622  420629 start.go:340] cluster config:
	{Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:58:42.626844  420629 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:58:42.630028  420629 out.go:177] * Starting "pause-635451" primary control-plane node in "pause-635451" cluster
	I0419 20:58:42.631630  420629 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:58:42.631679  420629 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:58:42.631690  420629 cache.go:56] Caching tarball of preloaded images
	I0419 20:58:42.631781  420629 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:58:42.631797  420629 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:58:42.631970  420629 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/config.json ...
	I0419 20:58:42.632223  420629 start.go:360] acquireMachinesLock for pause-635451: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:59:30.681834  420629 start.go:364] duration metric: took 48.049529377s to acquireMachinesLock for "pause-635451"
	I0419 20:59:30.681899  420629 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:59:30.681911  420629 fix.go:54] fixHost starting: 
	I0419 20:59:30.682416  420629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:30.682476  420629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:30.700423  420629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0419 20:59:30.700901  420629 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:30.701510  420629 main.go:141] libmachine: Using API Version  1
	I0419 20:59:30.701541  420629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:30.701940  420629 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:30.702179  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.702341  420629 main.go:141] libmachine: (pause-635451) Calling .GetState
	I0419 20:59:30.703893  420629 fix.go:112] recreateIfNeeded on pause-635451: state=Running err=<nil>
	W0419 20:59:30.703916  420629 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:59:30.705772  420629 out.go:177] * Updating the running kvm2 "pause-635451" VM ...
	I0419 20:59:30.707369  420629 machine.go:94] provisionDockerMachine start ...
	I0419 20:59:30.707397  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.707617  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.710554  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.710959  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.710987  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.711081  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.711269  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711460  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711602  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.711826  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.712100  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.712118  420629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:59:30.834113  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.834151  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834467  420629 buildroot.go:166] provisioning hostname "pause-635451"
	I0419 20:59:30.834501  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834722  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.837964  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838419  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.838469  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838687  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.838889  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839143  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839303  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.839515  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.839734  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.839750  420629 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-635451 && echo "pause-635451" | sudo tee /etc/hostname
	I0419 20:59:30.973130  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.973172  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.976399  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.976936  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.976970  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.977245  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.977516  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977696  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977895  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.978164  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.978377  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.978404  420629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-635451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-635451/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-635451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:31.095515  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:31.095549  420629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:31.095576  420629 buildroot.go:174] setting up certificates
	I0419 20:59:31.095590  420629 provision.go:84] configureAuth start
	I0419 20:59:31.095604  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:31.095912  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:31.098791  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099199  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.099232  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099354  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.101727  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102098  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.102137  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102268  420629 provision.go:143] copyHostCerts
	I0419 20:59:31.102326  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:31.102336  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:31.102385  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:31.102481  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:31.102490  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:31.102509  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:31.102569  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:31.102580  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:31.102596  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:31.102652  420629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.pause-635451 san=[127.0.0.1 192.168.39.194 localhost minikube pause-635451]
	I0419 20:59:31.284651  420629 provision.go:177] copyRemoteCerts
	I0419 20:59:31.284720  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:31.284747  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.287681  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288175  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.288289  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288508  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.288743  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.288920  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.289105  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:31.379802  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:31.414776  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0419 20:59:31.453224  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:59:31.485145  420629 provision.go:87] duration metric: took 389.537969ms to configureAuth
	I0419 20:59:31.485184  420629 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:31.485494  420629 config.go:182] Loaded profile config "pause-635451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:31.485605  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.488458  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.488839  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.488876  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.489023  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.489227  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489434  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489595  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.489836  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:31.490045  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:31.490061  420629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:37.569036  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:59:37.569065  420629 machine.go:97] duration metric: took 6.861676347s to provisionDockerMachine
	I0419 20:59:37.569080  420629 start.go:293] postStartSetup for "pause-635451" (driver="kvm2")
	I0419 20:59:37.569094  420629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:59:37.569116  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.569460  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:59:37.569494  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.572897  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573277  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.573315  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573533  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.573764  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.573958  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.574113  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.660176  420629 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:59:37.666228  420629 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:59:37.666266  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:59:37.666349  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:59:37.666478  420629 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:59:37.666642  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:59:37.676576  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:37.708706  420629 start.go:296] duration metric: took 139.607002ms for postStartSetup
	I0419 20:59:37.708760  420629 fix.go:56] duration metric: took 7.02684919s for fixHost
	I0419 20:59:37.708788  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.712071  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712499  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.712529  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712815  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.713047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713204  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713363  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.713652  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:37.713867  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:37.713880  420629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 20:59:37.834856  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560377.829171571
	
	I0419 20:59:37.834879  420629 fix.go:216] guest clock: 1713560377.829171571
	I0419 20:59:37.834901  420629 fix.go:229] Guest: 2024-04-19 20:59:37.829171571 +0000 UTC Remote: 2024-04-19 20:59:37.708765693 +0000 UTC m=+55.281230832 (delta=120.405878ms)
	I0419 20:59:37.834942  420629 fix.go:200] guest clock delta is within tolerance: 120.405878ms
	I0419 20:59:37.834949  420629 start.go:83] releasing machines lock for "pause-635451", held for 7.153078334s
	I0419 20:59:37.834980  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.835295  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:37.838347  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.838813  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.838856  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.839035  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839677  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839881  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839997  420629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:59:37.840047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.840159  420629 ssh_runner.go:195] Run: cat /version.json
	I0419 20:59:37.840190  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.842981  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843313  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843412  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843432  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843585  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843744  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843770  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.843816  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843892  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843961  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844030  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.844258  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844253  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.844411  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.968233  420629 ssh_runner.go:195] Run: systemctl --version
	I0419 20:59:37.976324  420629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:59:38.150175  420629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:59:38.158300  420629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:59:38.158367  420629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:59:38.169280  420629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:59:38.169311  420629 start.go:494] detecting cgroup driver to use...
	I0419 20:59:38.169396  420629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:59:38.187789  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:59:38.207289  420629 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:59:38.207348  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:59:38.224653  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:59:38.240107  420629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:59:38.440705  420629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:59:38.706842  420629 docker.go:233] disabling docker service ...
	I0419 20:59:38.706930  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:59:38.861517  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:59:38.956442  420629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:59:39.280078  420629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:59:39.664803  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:59:39.718459  420629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:59:39.788624  420629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:59:39.788711  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.812125  420629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:59:39.812221  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.827569  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.846551  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.864973  420629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:59:39.893015  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.911102  420629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.929726  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.944963  420629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:59:39.961238  420629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:59:39.976051  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:40.144723  420629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:59:40.736956  420629 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:59:40.737087  420629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:59:40.744406  420629 start.go:562] Will wait 60s for crictl version
	I0419 20:59:40.744476  420629 ssh_runner.go:195] Run: which crictl
	I0419 20:59:40.749493  420629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:59:40.808992  420629 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:59:40.809069  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:40.845299  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:41.074474  420629 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:59:41.075946  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:41.078826  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079218  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:41.079239  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079530  420629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:59:41.084700  420629 kubeadm.go:877] updating cluster {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:59:41.084849  420629 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:41.084908  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.132194  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.132218  420629 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:59:41.132265  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.168982  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.169013  420629 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:59:41.169022  420629 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0419 20:59:41.169132  420629 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-635451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:59:41.169195  420629 ssh_runner.go:195] Run: crio config
	I0419 20:59:41.224802  420629 cni.go:84] Creating CNI manager for ""
	I0419 20:59:41.224834  420629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:59:41.224861  420629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:59:41.224889  420629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-635451 NodeName:pause-635451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:59:41.225088  420629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-635451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 20:59:41.225164  420629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:59:41.236438  420629 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:59:41.236515  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:59:41.248119  420629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0419 20:59:41.266196  420629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:59:41.283660  420629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0419 20:59:41.301075  420629 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0419 20:59:41.305465  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:41.441651  420629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:59:41.459496  420629 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451 for IP: 192.168.39.194
	I0419 20:59:41.459527  420629 certs.go:194] generating shared ca certs ...
	I0419 20:59:41.459557  420629 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:59:41.459737  420629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:59:41.459797  420629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:59:41.459811  420629 certs.go:256] generating profile certs ...
	I0419 20:59:41.459920  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/client.key
	I0419 20:59:41.459999  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key.3d8dbd07
	I0419 20:59:41.460048  420629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key
	I0419 20:59:41.460206  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:59:41.460248  420629 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:59:41.460262  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:59:41.460296  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:59:41.460323  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:59:41.460413  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:59:41.460486  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:41.461374  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:59:41.486107  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:59:41.511022  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:59:41.536821  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:59:41.562123  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 20:59:41.660383  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:59:41.835053  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:59:41.975418  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:59:42.047847  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:59:42.117661  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:59:42.151760  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:59:42.202327  420629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:59:42.222270  420629 ssh_runner.go:195] Run: openssl version
	I0419 20:59:42.228883  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:59:42.239871  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246035  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246103  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.260668  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:59:42.288295  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:59:42.303259  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310347  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310421  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.317695  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:59:42.334472  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:59:42.352858  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366362  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366436  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.375115  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:59:42.397057  420629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:59:42.402305  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:59:42.408904  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:59:42.415914  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:59:42.422919  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:59:42.429906  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:59:42.438783  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 20:59:42.445427  420629 kubeadm.go:391] StartCluster: {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:42.445588  420629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:59:42.445650  420629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:59:42.485993  420629 cri.go:89] found id: "05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc"
	I0419 20:59:42.486023  420629 cri.go:89] found id: "3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f"
	I0419 20:59:42.486029  420629 cri.go:89] found id: "3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688"
	I0419 20:59:42.486034  420629 cri.go:89] found id: "966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c"
	I0419 20:59:42.486040  420629 cri.go:89] found id: "69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a"
	I0419 20:59:42.486044  420629 cri.go:89] found id: "4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2"
	I0419 20:59:42.486048  420629 cri.go:89] found id: "2b06ac09466c5939151d2c0f4169e8cb738cd1dd809ca6319d9e102e97a5c12a"
	I0419 20:59:42.486052  420629 cri.go:89] found id: ""
	I0419 20:59:42.486105  420629 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-635451 -n pause-635451
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-635451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-635451 logs -n 25: (1.581092892s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl status containerd            |                           |         |                |                     |                     |
	|         | --all --full --no-pager                |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl cat containerd               |                           |         |                |                     |                     |
	|         | --no-pager                             |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo find             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo crio             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-752991                       | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-683947              | running-upgrade-683947    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-725675 ssh cat      | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p old-k8s-version-771336              | old-k8s-version-771336    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --kvm-network=default                  |                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |                |                     |                     |
	|         | --disable-driver-mounts                |                           |         |                |                     |                     |
	|         | --keep-context=false                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:59 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-465658 ssh                | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-465658 -- sudo         | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p no-preload-202684                   | no-preload-202684         | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |                |                     |                     |
	|         |  --container-runtime=crio              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	| start   | -p pause-635451                        | pause-635451              | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 21:00 UTC |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:59:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:59:23.900280  421093 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:59:23.900418  421093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:59:23.900426  421093 out.go:304] Setting ErrFile to fd 2...
	I0419 20:59:23.900431  421093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:59:23.900607  421093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:59:23.901211  421093 out.go:298] Setting JSON to false
	I0419 20:59:23.902184  421093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9710,"bootTime":1713550654,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:59:23.902256  421093 start.go:139] virtualization: kvm guest
	I0419 20:59:23.904456  421093 out.go:177] * [kubernetes-upgrade-270819] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:59:23.906010  421093 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:59:23.905955  421093 notify.go:220] Checking for updates...
	I0419 20:59:23.907592  421093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:59:23.909100  421093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:59:23.910655  421093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:59:23.912296  421093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:59:23.913912  421093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:59:23.915900  421093 config.go:182] Loaded profile config "kubernetes-upgrade-270819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:23.916501  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:23.916563  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:23.933132  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0419 20:59:23.933601  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:23.934231  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:23.934249  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:23.934615  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:23.934859  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:23.935156  421093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:59:23.935452  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:23.935487  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:23.951448  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0419 20:59:23.952012  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:23.952513  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:23.952540  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:23.952908  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:23.953096  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:23.991318  421093 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:59:23.992704  421093 start.go:297] selected driver: kvm2
	I0419 20:59:23.992725  421093 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:23.992853  421093 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:59:23.993620  421093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:59:23.993708  421093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:59:24.009863  421093 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:59:24.010304  421093 cni.go:84] Creating CNI manager for ""
	I0419 20:59:24.010323  421093 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:59:24.010379  421093 start.go:340] cluster config:
	{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-270819 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:24.010511  421093 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:59:24.012141  421093 out.go:177] * Starting "kubernetes-upgrade-270819" primary control-plane node in "kubernetes-upgrade-270819" cluster
	I0419 20:59:21.100442  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:21.100940  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find current IP address of domain no-preload-202684 in network mk-no-preload-202684
	I0419 20:59:21.100972  420597 main.go:141] libmachine: (no-preload-202684) DBG | I0419 20:59:21.100883  420840 retry.go:31] will retry after 2.825571039s: waiting for machine to come up
	I0419 20:59:23.930514  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:23.931145  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find current IP address of domain no-preload-202684 in network mk-no-preload-202684
	I0419 20:59:23.931175  420597 main.go:141] libmachine: (no-preload-202684) DBG | I0419 20:59:23.931088  420840 retry.go:31] will retry after 5.224017665s: waiting for machine to come up
	I0419 20:59:24.013490  421093 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:24.013534  421093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:59:24.013556  421093 cache.go:56] Caching tarball of preloaded images
	I0419 20:59:24.013654  421093 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:59:24.013668  421093 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:59:24.013778  421093 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json ...
	I0419 20:59:24.014007  421093 start.go:360] acquireMachinesLock for kubernetes-upgrade-270819: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:59:30.681834  420629 start.go:364] duration metric: took 48.049529377s to acquireMachinesLock for "pause-635451"
	I0419 20:59:30.681899  420629 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:59:30.681911  420629 fix.go:54] fixHost starting: 
	I0419 20:59:30.682416  420629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:30.682476  420629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:30.700423  420629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0419 20:59:30.700901  420629 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:30.701510  420629 main.go:141] libmachine: Using API Version  1
	I0419 20:59:30.701541  420629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:30.701940  420629 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:30.702179  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.702341  420629 main.go:141] libmachine: (pause-635451) Calling .GetState
	I0419 20:59:30.703893  420629 fix.go:112] recreateIfNeeded on pause-635451: state=Running err=<nil>
	W0419 20:59:30.703916  420629 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:59:30.705772  420629 out.go:177] * Updating the running kvm2 "pause-635451" VM ...
	I0419 20:59:29.158216  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.158762  420597 main.go:141] libmachine: (no-preload-202684) Found IP for machine: 192.168.61.149
	I0419 20:59:29.158793  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has current primary IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.158803  420597 main.go:141] libmachine: (no-preload-202684) Reserving static IP address...
	I0419 20:59:29.159265  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find host DHCP lease matching {name: "no-preload-202684", mac: "52:54:00:e0:be:7b", ip: "192.168.61.149"} in network mk-no-preload-202684
	I0419 20:59:29.237448  420597 main.go:141] libmachine: (no-preload-202684) DBG | Getting to WaitForSSH function...
	I0419 20:59:29.237493  420597 main.go:141] libmachine: (no-preload-202684) Reserved static IP address: 192.168.61.149
	I0419 20:59:29.237527  420597 main.go:141] libmachine: (no-preload-202684) Waiting for SSH to be available...
	I0419 20:59:29.240102  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.240460  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.240498  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.240588  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using SSH client type: external
	I0419 20:59:29.240613  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa (-rw-------)
	I0419 20:59:29.240666  420597 main.go:141] libmachine: (no-preload-202684) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:59:29.240685  420597 main.go:141] libmachine: (no-preload-202684) DBG | About to run SSH command:
	I0419 20:59:29.240715  420597 main.go:141] libmachine: (no-preload-202684) DBG | exit 0
	I0419 20:59:29.369280  420597 main.go:141] libmachine: (no-preload-202684) DBG | SSH cmd err, output: <nil>: 
	I0419 20:59:29.369542  420597 main.go:141] libmachine: (no-preload-202684) KVM machine creation complete!
	I0419 20:59:29.369955  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetConfigRaw
	I0419 20:59:29.370566  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:29.370799  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:29.371057  420597 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:59:29.371078  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetState
	I0419 20:59:29.372511  420597 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:59:29.372547  420597 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:59:29.372554  420597 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:59:29.372563  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.374911  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.375306  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.375333  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.375487  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.375681  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.375846  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.376010  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.376213  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.376404  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.376414  420597 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:59:29.483978  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:29.484005  420597 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:59:29.484013  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.486852  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.487256  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.487282  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.487414  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.487635  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.487792  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.487975  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.488183  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.488353  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.488364  420597 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:59:29.597614  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:59:29.597736  420597 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:59:29.597751  420597 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:59:29.597762  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.598068  420597 buildroot.go:166] provisioning hostname "no-preload-202684"
	I0419 20:59:29.598115  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.598327  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.601018  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.601410  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.601442  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.601616  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.601817  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.601950  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.602073  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.602242  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.602430  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.602443  420597 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-202684 && echo "no-preload-202684" | sudo tee /etc/hostname
	I0419 20:59:29.728251  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-202684
	
	I0419 20:59:29.728285  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.731486  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.731930  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.731981  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.732182  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.732393  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.732598  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.732807  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.733019  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.733223  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.733247  420597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-202684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-202684/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-202684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:29.850475  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:29.850507  420597 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:29.850527  420597 buildroot.go:174] setting up certificates
	I0419 20:59:29.850540  420597 provision.go:84] configureAuth start
	I0419 20:59:29.850550  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.850862  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:29.853793  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.854161  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.854196  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.854340  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.856610  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.857017  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.857066  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.857235  420597 provision.go:143] copyHostCerts
	I0419 20:59:29.857300  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:29.857316  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:29.857396  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:29.857521  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:29.857533  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:29.857556  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:29.857632  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:29.857641  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:29.857660  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:29.857705  420597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.no-preload-202684 san=[127.0.0.1 192.168.61.149 localhost minikube no-preload-202684]
	I0419 20:59:29.946744  420597 provision.go:177] copyRemoteCerts
	I0419 20:59:29.946817  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:29.946855  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.949799  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.950143  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.950188  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.950379  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.950576  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.950733  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.950882  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.036140  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:30.065082  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0419 20:59:30.093417  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:59:30.121282  420597 provision.go:87] duration metric: took 270.729819ms to configureAuth
	I0419 20:59:30.121311  420597 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:30.121517  420597 config.go:182] Loaded profile config "no-preload-202684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:30.121637  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.124371  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.124684  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.124713  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.124900  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.125117  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.125320  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.125483  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.125664  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.125857  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:30.125877  420597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:30.427084  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:59:30.427132  420597 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:59:30.427144  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetURL
	I0419 20:59:30.428546  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using libvirt version 6000000
	I0419 20:59:30.431313  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.431677  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.431716  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.431861  420597 main.go:141] libmachine: Docker is up and running!
	I0419 20:59:30.431881  420597 main.go:141] libmachine: Reticulating splines...
	I0419 20:59:30.431889  420597 client.go:171] duration metric: took 25.148659401s to LocalClient.Create
	I0419 20:59:30.431929  420597 start.go:167] duration metric: took 25.148752115s to libmachine.API.Create "no-preload-202684"
	I0419 20:59:30.431950  420597 start.go:293] postStartSetup for "no-preload-202684" (driver="kvm2")
	I0419 20:59:30.431966  420597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:59:30.431991  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.432282  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:59:30.432317  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.434734  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.435131  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.435160  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.435320  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.435552  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.435676  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.435848  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.519801  420597 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:59:30.525265  420597 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:59:30.525296  420597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:59:30.525369  420597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:59:30.525456  420597 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:59:30.525586  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:59:30.536486  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:30.563656  420597 start.go:296] duration metric: took 131.688164ms for postStartSetup
	I0419 20:59:30.563721  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetConfigRaw
	I0419 20:59:30.564460  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:30.567288  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.567635  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.567666  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.567986  420597 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/no-preload-202684/config.json ...
	I0419 20:59:30.568231  420597 start.go:128] duration metric: took 25.310088259s to createHost
	I0419 20:59:30.568266  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.570764  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.571139  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.571164  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.571280  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.571452  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.571627  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.571772  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.571940  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.572100  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:30.572110  420597 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:59:30.681622  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560370.653253483
	
	I0419 20:59:30.681655  420597 fix.go:216] guest clock: 1713560370.653253483
	I0419 20:59:30.681666  420597 fix.go:229] Guest: 2024-04-19 20:59:30.653253483 +0000 UTC Remote: 2024-04-19 20:59:30.568250409 +0000 UTC m=+49.743444388 (delta=85.003074ms)
	I0419 20:59:30.681694  420597 fix.go:200] guest clock delta is within tolerance: 85.003074ms
	I0419 20:59:30.681701  420597 start.go:83] releasing machines lock for "no-preload-202684", held for 25.42376948s
	I0419 20:59:30.681733  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.682037  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:30.685108  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.685476  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.685507  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.685843  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686452  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686662  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686757  420597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:59:30.686802  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.686933  420597 ssh_runner.go:195] Run: cat /version.json
	I0419 20:59:30.686961  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.689936  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.689962  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690359  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.690399  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690430  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.690465  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690582  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.690709  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.690787  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.690879  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.690946  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.691046  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.691129  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.691215  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.821054  420597 ssh_runner.go:195] Run: systemctl --version
	I0419 20:59:30.828007  420597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:59:30.997248  420597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:59:31.004724  420597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:59:31.004835  420597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:59:31.025692  420597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:59:31.025721  420597 start.go:494] detecting cgroup driver to use...
	I0419 20:59:31.025797  420597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:59:31.045483  420597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:59:31.060514  420597 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:59:31.060603  420597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:59:31.075492  420597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:59:31.089977  420597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:59:31.212787  420597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:59:31.379339  420597 docker.go:233] disabling docker service ...
	I0419 20:59:31.379434  420597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:59:31.397087  420597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:59:31.414396  420597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:59:31.560930  420597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:59:31.705881  420597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:59:31.723109  420597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:59:31.746490  420597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:59:31.746570  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.758278  420597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:59:31.758363  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.771692  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.783633  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.796030  420597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:59:31.808595  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.822123  420597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.840497  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.851808  420597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:59:31.861994  420597 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:59:31.862062  420597 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:59:31.876810  420597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:59:31.888895  420597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:32.012433  420597 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:59:32.154663  420597 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:59:32.154756  420597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:59:32.159531  420597 start.go:562] Will wait 60s for crictl version
	I0419 20:59:32.159618  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.163474  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:59:32.202018  420597 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:59:32.202115  420597 ssh_runner.go:195] Run: crio --version
	I0419 20:59:32.235615  420597 ssh_runner.go:195] Run: crio --version
	I0419 20:59:32.271806  420597 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:59:30.707369  420629 machine.go:94] provisionDockerMachine start ...
	I0419 20:59:30.707397  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.707617  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.710554  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.710959  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.710987  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.711081  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.711269  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711460  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711602  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.711826  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.712100  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.712118  420629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:59:30.834113  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.834151  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834467  420629 buildroot.go:166] provisioning hostname "pause-635451"
	I0419 20:59:30.834501  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834722  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.837964  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838419  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.838469  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838687  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.838889  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839143  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839303  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.839515  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.839734  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.839750  420629 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-635451 && echo "pause-635451" | sudo tee /etc/hostname
	I0419 20:59:30.973130  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.973172  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.976399  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.976936  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.976970  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.977245  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.977516  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977696  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977895  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.978164  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.978377  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.978404  420629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-635451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-635451/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-635451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:31.095515  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
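The SSH script above is an idempotent /etc/hosts edit: skip if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new one. The same logic as a pure Go function over the file contents; ensureLoopbackHostname is a hypothetical helper for illustration only:

	// Sketch of the idempotent /etc/hosts edit performed by the SSH script above.
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	// ensureLoopbackHostname returns hosts with a "127.0.1.1 <name>" entry,
	// replacing an existing 127.0.1.1 line or appending one, and leaving the
	// contents untouched if the hostname is already present.
	func ensureLoopbackHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // already mapped
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}
	
	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureLoopbackHostname(in, "pause-635451"))
	}
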
	I0419 20:59:31.095549  420629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:31.095576  420629 buildroot.go:174] setting up certificates
	I0419 20:59:31.095590  420629 provision.go:84] configureAuth start
	I0419 20:59:31.095604  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:31.095912  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:31.098791  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099199  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.099232  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099354  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.101727  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102098  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.102137  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102268  420629 provision.go:143] copyHostCerts
	I0419 20:59:31.102326  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:31.102336  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:31.102385  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:31.102481  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:31.102490  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:31.102509  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:31.102569  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:31.102580  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:31.102596  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
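copyHostCerts above refreshes each certificate by removing any stale copy and copying the source into place. A small sketch of that remove-then-copy step; the paths and the refreshFile helper are assumptions for illustration, not minikube's exec_runner:

	// Sketch of the remove-then-copy pattern the copyHostCerts step logs.
	package main
	
	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)
	
	// refreshFile removes dst if it already exists and copies src into its place.
	func refreshFile(src, dst string) (int64, error) {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return 0, err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return 0, err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0600)
		if err != nil {
			return 0, err
		}
		defer out.Close()
		return io.Copy(out, in)
	}
	
	func main() {
		home, _ := os.UserHomeDir()
		base := filepath.Join(home, ".minikube")
		for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
			n, err := refreshFile(filepath.Join(base, "certs", f), filepath.Join(base, f))
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("cp: %s (%d bytes)\n", f, n)
		}
	}
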
	I0419 20:59:31.102652  420629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.pause-635451 san=[127.0.0.1 192.168.39.194 localhost minikube pause-635451]
	I0419 20:59:31.284651  420629 provision.go:177] copyRemoteCerts
	I0419 20:59:31.284720  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:31.284747  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.287681  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288175  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.288289  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288508  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.288743  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.288920  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.289105  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:31.379802  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:31.414776  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0419 20:59:31.453224  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:59:31.485145  420629 provision.go:87] duration metric: took 389.537969ms to configureAuth
	I0419 20:59:31.485184  420629 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:31.485494  420629 config.go:182] Loaded profile config "pause-635451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:31.485605  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.488458  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.488839  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.488876  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.489023  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.489227  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489434  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489595  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.489836  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:31.490045  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:31.490061  420629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:32.273076  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:32.276013  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:32.276388  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:32.276419  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:32.276620  420597 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0419 20:59:32.280762  420597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:59:32.293872  420597 kubeadm.go:877] updating cluster {Name:no-preload-202684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-202684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.149 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:59:32.293991  420597 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:32.294037  420597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:32.328536  420597 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 20:59:32.328564  420597 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 20:59:32.328651  420597 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:32.328658  420597 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.328669  420597 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.328691  420597 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.328717  420597 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.328726  420597 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0419 20:59:32.328738  420597 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.328704  420597 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.330187  420597 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.330197  420597 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.330187  420597 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.330188  420597 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.330188  420597 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.330199  420597 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:32.330247  420597 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.330249  420597 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
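The daemon lookups fail because these images are not in a local Docker daemon, so the lines that follow fall back to the on-VM runtime: inspect each image, mark it "needs transfer" when the pinned ID is missing, crictl-rmi the stale tag, scp the cached tarball onto the VM if it is not already there, and podman-load it. A dry-run sketch of that per-image decision; the run/copyToVM hooks and the tarball naming scheme are hypothetical stand-ins:

	// Illustrative sketch of the per-image cache-loading flow traced below.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// ensureImage sketches the flow: inspect, remove stale tag, transfer the
	// cached tarball if needed, then load it into the runtime.
	func ensureImage(image, wantID string,
		run func(cmd string) (string, error),
		copyToVM func(local, remote string) error) error {
	
		got, _ := run("sudo podman image inspect --format {{.Id}} " + image)
		if strings.TrimSpace(got) == wantID {
			return nil // already present at the pinned hash
		}
		// "needs transfer": drop whatever is currently tagged under this name.
		if _, err := run("sudo /usr/bin/crictl rmi " + image); err != nil {
			return fmt.Errorf("rmi %s: %w", image, err)
		}
		// Flattened tarball name; the exact naming scheme here is illustrative.
		tar := strings.NewReplacer("/", "_", ":", "_").Replace(image)
		remote := "/var/lib/minikube/images/" + tar
		if _, err := run(`stat -c "%s %y" ` + remote); err != nil {
			// Not on the VM yet: copy the cached tarball over first.
			if err := copyToVM("~/.minikube/cache/images/amd64/"+tar, remote); err != nil {
				return err
			}
		}
		_, err := run("sudo podman load -i " + remote)
		return err
	}
	
	func main() {
		// Dry-run hooks that only print what would be executed.
		run := func(cmd string) (string, error) { fmt.Println("run:", cmd); return "", nil }
		cp := func(local, remote string) error { fmt.Println("scp:", local, "->", remote); return nil }
		_ = ensureImage("registry.k8s.io/pause:3.9", "pinned-image-id", run, cp)
	}
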
	I0419 20:59:32.498381  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.518860  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0419 20:59:32.528800  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.540908  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.541702  420597 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0419 20:59:32.541753  420597 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.541798  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.549648  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.568736  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.596173  420597 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0419 20:59:32.596240  420597 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0419 20:59:32.596297  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.614844  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.642471  420597 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0419 20:59:32.642533  420597 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.642585  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.685419  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.685455  420597 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0419 20:59:32.685501  420597 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.685547  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.685550  420597 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0419 20:59:32.685948  420597 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0419 20:59:32.686024  420597 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.686087  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.686140  420597 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.686187  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.686146  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0419 20:59:32.727542  420597 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0419 20:59:32.727654  420597 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.727697  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.727603  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.752137  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.752220  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0419 20:59:32.752327  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:32.791617  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0419 20:59:32.791676  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.791694  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.791725  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0419 20:59:32.791699  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.837475  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0419 20:59:32.837537  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0419 20:59:32.837587  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:32.837603  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0419 20:59:32.837630  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:32.837647  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
	I0419 20:59:32.960917  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0419 20:59:32.960944  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0419 20:59:32.960978  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0419 20:59:32.961023  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0419 20:59:32.961045  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:32.961021  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0419 20:59:32.961082  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0419 20:59:32.961098  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0419 20:59:32.961105  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:32.961101  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0419 20:59:32.961045  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:32.961112  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0419 20:59:33.046382  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0419 20:59:33.046433  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0419 20:59:33.046534  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0419 20:59:33.046553  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0419 20:59:33.046595  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0419 20:59:33.046614  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0419 20:59:33.079017  420597 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0419 20:59:33.079106  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0419 20:59:33.128757  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:33.985878  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0419 20:59:33.985932  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:33.985989  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:33.986002  420597 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0419 20:59:33.986070  420597 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:33.986123  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:37.835048  421093 start.go:364] duration metric: took 13.820993299s to acquireMachinesLock for "kubernetes-upgrade-270819"
	I0419 20:59:37.835121  421093 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:59:37.835134  421093 fix.go:54] fixHost starting: 
	I0419 20:59:37.835631  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:37.835683  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:37.853959  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0419 20:59:37.854414  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:37.854915  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:37.854940  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:37.855359  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:37.855567  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:37.855709  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetState
	I0419 20:59:37.863039  421093 fix.go:112] recreateIfNeeded on kubernetes-upgrade-270819: state=Running err=<nil>
	W0419 20:59:37.863332  421093 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:59:37.865178  421093 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-270819" VM ...
	I0419 20:59:37.866575  421093 machine.go:94] provisionDockerMachine start ...
	I0419 20:59:37.866608  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:37.866842  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:37.870718  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:37.870924  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:37.870954  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:37.871297  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:37.871487  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:37.871670  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:37.871912  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:37.872107  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:37.872357  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:37.872374  421093 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:59:37.997044  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-270819
	
	I0419 20:59:37.997082  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:37.997407  421093 buildroot.go:166] provisioning hostname "kubernetes-upgrade-270819"
	I0419 20:59:37.997439  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:37.997626  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.001261  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.001742  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.001774  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.002004  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.002249  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.002382  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.002479  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.002624  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.002854  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.002874  421093 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-270819 && echo "kubernetes-upgrade-270819" | sudo tee /etc/hostname
	I0419 20:59:38.143534  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-270819
	
	I0419 20:59:38.143572  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.146776  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.147136  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.147173  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.147382  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.147584  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.147774  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.147922  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.148136  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.148350  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.148370  421093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-270819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-270819/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-270819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:38.259190  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:38.259226  421093 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:38.259250  421093 buildroot.go:174] setting up certificates
	I0419 20:59:38.259260  421093 provision.go:84] configureAuth start
	I0419 20:59:38.259273  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:38.259608  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:59:38.263019  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.263462  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.263492  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.263650  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.266661  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.267039  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.267082  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.267363  421093 provision.go:143] copyHostCerts
	I0419 20:59:38.267425  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:38.267435  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:38.267476  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:38.267571  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:38.267579  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:38.267600  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:38.267686  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:38.267697  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:38.267723  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:38.267815  421093 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-270819 san=[127.0.0.1 192.168.50.60 kubernetes-upgrade-270819 localhost minikube]
	I0419 20:59:38.404665  421093 provision.go:177] copyRemoteCerts
	I0419 20:59:38.404736  421093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:38.404766  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.408075  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.408495  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.408542  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.408749  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.408997  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.409185  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.409326  421093 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:59:38.497449  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:59:38.537933  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:38.580349  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0419 20:59:38.615657  421093 provision.go:87] duration metric: took 356.381354ms to configureAuth
	I0419 20:59:38.615698  421093 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:38.615897  421093 config.go:182] Loaded profile config "kubernetes-upgrade-270819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:38.615992  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.619079  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.619558  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.619592  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.619825  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.620072  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.620290  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.620471  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.620729  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.620985  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.621011  421093 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:36.073173  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.087160464s)
	I0419 20:59:36.073209  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0419 20:59:36.073174  420597 ssh_runner.go:235] Completed: which crictl: (2.087022041s)
	I0419 20:59:36.073238  420597 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:36.073293  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:36.073298  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:36.119621  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 20:59:36.119733  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0419 20:59:38.966415  420597 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.846650235s)
	I0419 20:59:38.966468  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0419 20:59:38.966521  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0419 20:59:38.966547  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.893224823s)
	I0419 20:59:38.966574  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0419 20:59:38.966605  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:38.966663  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:37.569036  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:59:37.569065  420629 machine.go:97] duration metric: took 6.861676347s to provisionDockerMachine
	I0419 20:59:37.569080  420629 start.go:293] postStartSetup for "pause-635451" (driver="kvm2")
	I0419 20:59:37.569094  420629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:59:37.569116  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.569460  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:59:37.569494  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.572897  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573277  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.573315  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573533  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.573764  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.573958  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.574113  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.660176  420629 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:59:37.666228  420629 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:59:37.666266  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:59:37.666349  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:59:37.666478  420629 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:59:37.666642  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:59:37.676576  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:37.708706  420629 start.go:296] duration metric: took 139.607002ms for postStartSetup
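The filesync scan in postStartSetup above mirrors everything under .minikube/files onto the guest at the same relative path, which is how files/etc/ssl/certs/3739982.pem ends up in /etc/ssl/certs. A sketch of that scan; it only walks the local tree, and the copy-over-SSH step from the log is omitted:

	// Sketch of the local-assets scan: each file under .minikube/files maps to
	// its relative path on the guest (illustrative only).
	package main
	
	import (
		"fmt"
		"io/fs"
		"os"
		"path/filepath"
	)
	
	// localAssets maps each file under root to its destination path on the guest.
	func localAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(root, p)
			if err != nil {
				return err
			}
			assets[p] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return assets, err
	}
	
	func main() {
		home, _ := os.UserHomeDir()
		assets, err := localAssets(filepath.Join(home, ".minikube", "files"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for src, dst := range assets {
			fmt.Printf("local asset: %s -> %s\n", src, dst)
		}
	}
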
	I0419 20:59:37.708760  420629 fix.go:56] duration metric: took 7.02684919s for fixHost
	I0419 20:59:37.708788  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.712071  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712499  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.712529  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712815  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.713047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713204  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713363  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.713652  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:37.713867  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:37.713880  420629 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:59:37.834856  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560377.829171571
	
	I0419 20:59:37.834879  420629 fix.go:216] guest clock: 1713560377.829171571
	I0419 20:59:37.834901  420629 fix.go:229] Guest: 2024-04-19 20:59:37.829171571 +0000 UTC Remote: 2024-04-19 20:59:37.708765693 +0000 UTC m=+55.281230832 (delta=120.405878ms)
	I0419 20:59:37.834942  420629 fix.go:200] guest clock delta is within tolerance: 120.405878ms
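The guest-clock check above compares the VM's clock with the host's reference time and only resynchronizes when the delta exceeds a tolerance; here the ~120ms delta passes. A toy sketch of that comparison; the one-second tolerance is an assumed value for the sketch, not minikube's setting:

	// Minimal sketch of a guest-vs-host clock delta check under an assumed tolerance.
	package main
	
	import (
		"fmt"
		"math"
		"time"
	)
	
	const clockTolerance = time.Second // assumed value for the sketch
	
	// clockDelta returns how far the guest clock is from the host reference.
	func clockDelta(guest, host time.Time) time.Duration {
		return time.Duration(math.Abs(float64(guest.Sub(host))))
	}
	
	func main() {
		host := time.Now()
		guest := host.Add(120 * time.Millisecond) // e.g. the ~120ms delta in the log
		if d := clockDelta(guest, host); d <= clockTolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", d)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", d)
		}
	}
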
	I0419 20:59:37.834949  420629 start.go:83] releasing machines lock for "pause-635451", held for 7.153078334s
	I0419 20:59:37.834980  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.835295  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:37.838347  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.838813  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.838856  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.839035  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839677  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839881  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839997  420629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:59:37.840047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.840159  420629 ssh_runner.go:195] Run: cat /version.json
	I0419 20:59:37.840190  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.842981  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843313  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843412  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843432  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843585  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843744  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843770  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.843816  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843892  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843961  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844030  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.844258  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844253  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.844411  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.968233  420629 ssh_runner.go:195] Run: systemctl --version
	I0419 20:59:37.976324  420629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:59:38.150175  420629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:59:38.158300  420629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:59:38.158367  420629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:59:38.169280  420629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:59:38.169311  420629 start.go:494] detecting cgroup driver to use...
	I0419 20:59:38.169396  420629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:59:38.187789  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:59:38.207289  420629 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:59:38.207348  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:59:38.224653  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:59:38.240107  420629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:59:38.440705  420629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:59:38.706842  420629 docker.go:233] disabling docker service ...
	I0419 20:59:38.706930  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:59:38.861517  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:59:38.956442  420629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:59:39.280078  420629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:59:39.664803  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:59:39.718459  420629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:59:39.788624  420629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:59:39.788711  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.812125  420629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:59:39.812221  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.827569  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.846551  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.864973  420629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:59:39.893015  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.911102  420629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.929726  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.944963  420629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:59:39.961238  420629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:59:39.976051  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:40.144723  420629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:59:40.736956  420629 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:59:40.737087  420629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:59:40.744406  420629 start.go:562] Will wait 60s for crictl version
	I0419 20:59:40.744476  420629 ssh_runner.go:195] Run: which crictl
	I0419 20:59:40.749493  420629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:59:40.808992  420629 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:59:40.809069  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:40.845299  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:41.074474  420629 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:59:37.319116  420033 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0419 20:59:37.327299  420033 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:59:37.327556  420033 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0419 20:59:41.075946  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:41.078826  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079218  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:41.079239  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079530  420629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:59:41.084700  420629 kubeadm.go:877] updating cluster {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:59:41.084849  420629 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:41.084908  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.132194  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.132218  420629 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:59:41.132265  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.168982  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.169013  420629 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:59:41.169022  420629 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0419 20:59:41.169132  420629 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-635451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:59:41.169195  420629 ssh_runner.go:195] Run: crio config
	I0419 20:59:41.224802  420629 cni.go:84] Creating CNI manager for ""
	I0419 20:59:41.224834  420629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:59:41.224861  420629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:59:41.224889  420629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-635451 NodeName:pause-635451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:59:41.225088  420629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-635451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 20:59:41.225164  420629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:59:41.236438  420629 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:59:41.236515  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:59:41.248119  420629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0419 20:59:41.266196  420629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:59:41.283660  420629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0419 20:59:41.301075  420629 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0419 20:59:41.305465  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:41.441651  420629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:59:41.459496  420629 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451 for IP: 192.168.39.194
	I0419 20:59:41.459527  420629 certs.go:194] generating shared ca certs ...
	I0419 20:59:41.459557  420629 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:59:41.459737  420629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:59:41.459797  420629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:59:41.459811  420629 certs.go:256] generating profile certs ...
	I0419 20:59:41.459920  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/client.key
	I0419 20:59:41.459999  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key.3d8dbd07
	I0419 20:59:41.460048  420629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key
	I0419 20:59:41.460206  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:59:41.460248  420629 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:59:41.460262  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:59:41.460296  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:59:41.460323  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:59:41.460413  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:59:41.460486  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:41.461374  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:59:41.486107  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:59:41.511022  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:59:41.536821  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:59:41.562123  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 20:59:41.660383  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:59:41.835053  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:59:41.975418  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:59:42.047847  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:59:42.117661  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:59:42.151760  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:59:42.202327  420629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:59:42.222270  420629 ssh_runner.go:195] Run: openssl version
	I0419 20:59:42.228883  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:59:42.239871  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246035  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246103  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.260668  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:59:42.288295  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:59:42.303259  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310347  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310421  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.317695  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:59:42.334472  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:59:42.352858  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366362  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366436  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.375115  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 20:59:42.397057  420629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:59:42.402305  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:59:42.408904  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:59:42.415914  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:59:42.422919  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:59:42.429906  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:59:42.438783  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0419 20:59:42.445427  420629 kubeadm.go:391] StartCluster: {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:42.445588  420629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:59:42.445650  420629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:59:42.485993  420629 cri.go:89] found id: "05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc"
	I0419 20:59:42.486023  420629 cri.go:89] found id: "3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f"
	I0419 20:59:42.486029  420629 cri.go:89] found id: "3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688"
	I0419 20:59:42.486034  420629 cri.go:89] found id: "966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c"
	I0419 20:59:42.486040  420629 cri.go:89] found id: "69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a"
	I0419 20:59:42.486044  420629 cri.go:89] found id: "4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2"
	I0419 20:59:42.486048  420629 cri.go:89] found id: "2b06ac09466c5939151d2c0f4169e8cb738cd1dd809ca6319d9e102e97a5c12a"
	I0419 20:59:42.486052  420629 cri.go:89] found id: ""
	I0419 20:59:42.486105  420629 ssh_runner.go:195] Run: sudo runc list -f json
	I0419 20:59:41.150278  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.183579353s)
	I0419 20:59:41.150316  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0419 20:59:41.150342  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:41.150402  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:43.316013  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.165576886s)
	I0419 20:59:43.316078  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0419 20:59:43.316117  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:43.316173  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:45.677302  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.361098816s)
	I0419 20:59:45.677334  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0419 20:59:45.677371  420597 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:45.677448  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:42.328269  420033 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:59:42.328549  420033 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	
	==> CRI-O <==
	Apr 19 21:00:11 pause-635451 crio[2727]: time="2024-04-19 21:00:11.954367728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560411954338962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd8e6a3b-44b2-4bef-a848-f620ed70fe62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:11 pause-635451 crio[2727]: time="2024-04-19 21:00:11.955381927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=331c483a-20c6-4235-92f7-50c6f332698e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:11 pause-635451 crio[2727]: time="2024-04-19 21:00:11.955449568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=331c483a-20c6-4235-92f7-50c6f332698e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:11 pause-635451 crio[2727]: time="2024-04-19 21:00:11.955799442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=331c483a-20c6-4235-92f7-50c6f332698e name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.023011763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7ade4fb-e765-410b-af44-ec2298a9e234 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.023140012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7ade4fb-e765-410b-af44-ec2298a9e234 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.024276085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8af61fe-b321-4caf-a453-5d4e8c82754c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.025033110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560412024995986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8af61fe-b321-4caf-a453-5d4e8c82754c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.025794665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d6d5d79-a72f-4a03-a6de-e85ea62bfbb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.025893927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d6d5d79-a72f-4a03-a6de-e85ea62bfbb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.026233154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d6d5d79-a72f-4a03-a6de-e85ea62bfbb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.076790564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6643af9a-cc74-4a2b-9de1-ad94d427f306 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.076922180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6643af9a-cc74-4a2b-9de1-ad94d427f306 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.078383243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c51effa6-196b-4335-8460-89abe4f9f961 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.079075803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560412079040935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c51effa6-196b-4335-8460-89abe4f9f961 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.079957338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09876562-1187-4a9a-a4bb-6f407f92789c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.080036218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09876562-1187-4a9a-a4bb-6f407f92789c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.080445488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09876562-1187-4a9a-a4bb-6f407f92789c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.125467929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50dedc99-af25-4293-8946-5d39558c0dd2 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.125806704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50dedc99-af25-4293-8946-5d39558c0dd2 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.128275818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb090167-be9a-4937-b812-ddb894c2d7d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.129211735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560412129180128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb090167-be9a-4937-b812-ddb894c2d7d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.129963787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adcdaab8-f33b-4d4a-ad85-5fdc2e6c94a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.130035029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adcdaab8-f33b-4d4a-ad85-5fdc2e6c94a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:12 pause-635451 crio[2727]: time="2024-04-19 21:00:12.130364644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adcdaab8-f33b-4d4a-ad85-5fdc2e6c94a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71b0fcef59eb4       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   16 seconds ago      Running             kube-proxy                2                   c3063be3674ca       kube-proxy-htrpl
	a9164b4a6e2d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago      Running             coredns                   2                   437b207ae6e5c       coredns-7db6d8ff4d-kdzqp
	1527fc950a7e5       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   16 seconds ago      Running             kube-scheduler            2                   5336fe835cda5       kube-scheduler-pause-635451
	59e56c4b1b47e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   17 seconds ago      Running             kube-controller-manager   2                   c322b07967f55       kube-controller-manager-pause-635451
	b41821fec9389       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   7b56944d9345c       etcd-pause-635451
	74e2dafa9f68b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   21 seconds ago      Running             kube-apiserver            2                   1898ecf7e47f5       kube-apiserver-pause-635451
	05da0476b25f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   437b207ae6e5c       coredns-7db6d8ff4d-kdzqp
	3d71002c43260       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   32 seconds ago      Exited              kube-controller-manager   1                   936f48449dffe       kube-controller-manager-pause-635451
	3f576aaf9e9ac       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   33 seconds ago      Exited              kube-proxy                1                   71f2637616441       kube-proxy-htrpl
	966cdde8876c5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   33 seconds ago      Exited              etcd                      1                   b1fc26c3e6abb       etcd-pause-635451
	69065f054f185       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   33 seconds ago      Exited              kube-scheduler            1                   dd7e79bdfd3d8       kube-scheduler-pause-635451
	4b96ad4f8e464       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   33 seconds ago      Exited              kube-apiserver            1                   eaad491048a1b       kube-apiserver-pause-635451
	
	
	==> coredns [05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53034 - 798 "HINFO IN 3218961398787753936.8471243877830540174. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016503608s
	
	
	==> coredns [a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39485 - 62323 "HINFO IN 782361804124066944.5855134252363606121. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010080698s
	
	
	==> describe nodes <==
	Name:               pause-635451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-635451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=pause-635451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_57_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:57:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-635451
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 21:00:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    pause-635451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a76f5d1555d4c058bf01af025880694
	  System UUID:                6a76f5d1-555d-4c05-8bf0-1af025880694
	  Boot ID:                    ad778ef9-e879-4dd3-a365-2faa099aab85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-kdzqp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m12s
	  kube-system                 etcd-pause-635451                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m28s
	  kube-system                 kube-apiserver-pause-635451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-pause-635451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-htrpl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-pause-635451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (8%)   170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m11s              kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m28s              kubelet          Node pause-635451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m28s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m28s              kubelet          Node pause-635451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s              kubelet          Node pause-635451 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m28s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m26s              kubelet          Node pause-635451 status is now: NodeReady
	  Normal  RegisteredNode           2m14s              node-controller  Node pause-635451 event: Registered Node pause-635451 in Controller
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node pause-635451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node pause-635451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node pause-635451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node pause-635451 event: Registered Node pause-635451 in Controller
	
	
	==> dmesg <==
	[  +9.551093] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066935] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.185309] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.159094] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.298736] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.691706] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062737] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.354551] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.767149] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.820921] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.083835] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.943250] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.101097] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:58] kauditd_printk_skb: 69 callbacks suppressed
	[Apr19 20:59] systemd-fstab-generator[2165]: Ignoring "noauto" option for root device
	[  +0.251142] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.569864] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.358522] systemd-fstab-generator[2568]: Ignoring "noauto" option for root device
	[  +0.548602] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +1.315344] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +3.447997] kauditd_printk_skb: 243 callbacks suppressed
	[  +9.220327] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +2.797483] kauditd_printk_skb: 44 callbacks suppressed
	[Apr19 21:00] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	
	
	==> etcd [966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c] <==
	{"level":"info","ts":"2024-04-19T20:59:39.536221Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"44.894246ms"}
	{"level":"info","ts":"2024-04-19T20:59:39.551137Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-19T20:59:39.617375Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","commit-index":441}
	{"level":"info","ts":"2024-04-19T20:59:39.61767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-19T20:59:39.617783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became follower at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:39.617798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b4bd7d4638784c91 [peers: [], term: 2, commit: 441, applied: 0, lastindex: 441, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-19T20:59:39.641815Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-19T20:59:39.684285Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":409}
	{"level":"info","ts":"2024-04-19T20:59:39.689386Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-19T20:59:39.693782Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b4bd7d4638784c91","timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:59:39.694153Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b4bd7d4638784c91"}
	{"level":"info","ts":"2024-04-19T20:59:39.694199Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b4bd7d4638784c91","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-19T20:59:39.694778Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-19T20:59:39.694961Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.695031Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.695041Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.69532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2024-04-19T20:59:39.695385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2024-04-19T20:59:39.700711Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:39.700756Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T20:59:39.700767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:39.70111Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:59:39.70119Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:59:39.701359Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:39.701365Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	
	
	==> etcd [b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c] <==
	{"level":"info","ts":"2024-04-19T20:59:51.338417Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:59:51.338535Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:59:51.33864Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:51.338679Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:52.707819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.70797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.707978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.708014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.712158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:59:52.713996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-19T20:59:52.714296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:59:52.715821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-19T20:59:52.712101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:pause-635451 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:59:52.720654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T20:59:52.720669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-04-19T20:59:56.278072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.872282ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348214989032130 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-635451\" mod_revision:430 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-635451\" value_size:5593 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-635451\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-19T20:59:56.27817Z","caller":"traceutil/trace.go:171","msg":"trace[632608803] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:478; }","duration":"444.792391ms","start":"2024-04-19T20:59:55.833361Z","end":"2024-04-19T20:59:56.278154Z","steps":["trace[632608803] 'read index received'  (duration: 322.897021ms)","trace[632608803] 'applied index is now lower than readState.Index'  (duration: 121.894523ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:59:56.278318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"444.950633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:1 size:1930"}
	{"level":"info","ts":"2024-04-19T20:59:56.27835Z","caller":"traceutil/trace.go:171","msg":"trace[1678152905] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:439; }","duration":"444.990905ms","start":"2024-04-19T20:59:55.833339Z","end":"2024-04-19T20:59:56.27833Z","steps":["trace[1678152905] 'agreement among raft nodes before linearized reading'  (duration: 444.856585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:59:56.278373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:59:55.833326Z","time spent":"445.042333ms","remote":"127.0.0.1:45866","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":1954,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" "}
	{"level":"info","ts":"2024-04-19T20:59:56.278824Z","caller":"traceutil/trace.go:171","msg":"trace[1215252187] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"471.760768ms","start":"2024-04-19T20:59:55.807045Z","end":"2024-04-19T20:59:56.278806Z","steps":["trace[1215252187] 'process raft request'  (duration: 349.260846ms)","trace[1215252187] 'compare'  (duration: 120.801762ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:59:56.278896Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:59:55.80703Z","time spent":"471.828451ms","remote":"127.0.0.1:45726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5645,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-635451\" mod_revision:430 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-635451\" value_size:5593 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-635451\" > >"}
	
	
	==> kernel <==
	 21:00:12 up 3 min,  0 users,  load average: 1.11, 0.47, 0.18
	Linux pause-635451 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2] <==
	I0419 20:59:39.603565       1 options.go:221] external host was not specified, using 192.168.39.194
	I0419 20:59:39.604897       1 server.go:148] Version: v1.30.0
	I0419 20:59:39.604933       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100] <==
	I0419 20:59:54.771087       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:59:54.771163       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:59:54.771736       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:59:54.771818       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:59:54.771846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:59:54.771870       1 cache.go:39] Caches are synced for autoregister controller
	E0419 20:59:54.781011       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0419 20:59:54.796435       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 20:59:54.800966       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:59:54.801015       1 policy_source.go:224] refreshing policies
	I0419 20:59:54.856872       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:59:55.689234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 20:59:56.348173       1 trace.go:236] Trace[1917383722]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6eae9f10-d2cf-426e-89f0-b48984ccce19,client:192.168.39.194,api-group:,api-version:v1,name:etcd-pause-635451,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-635451/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (19-Apr-2024 20:59:55.778) (total time: 569ms):
	Trace[1917383722]: ["GuaranteedUpdate etcd3" audit-id:6eae9f10-d2cf-426e-89f0-b48984ccce19,key:/pods/kube-system/etcd-pause-635451,type:*core.Pod,resource:pods 569ms (20:59:55.778)
	Trace[1917383722]:  ---"Txn call completed" 541ms (20:59:56.343)]
	Trace[1917383722]: ---"About to check admission control" 16ms (20:59:55.795)
	Trace[1917383722]: ---"Object stored in database" 548ms (20:59:56.343)
	Trace[1917383722]: [569.322831ms] [569.322831ms] END
	I0419 20:59:57.063278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 20:59:57.077424       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:59:57.138881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:59:57.181078       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 20:59:57.190271       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 21:00:07.814929       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 21:00:07.820215       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f] <==
	
	
	==> kube-controller-manager [59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8] <==
	I0419 21:00:07.841929       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-635451"
	I0419 21:00:07.842091       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 21:00:07.846023       1 shared_informer.go:320] Caches are synced for deployment
	I0419 21:00:07.848914       1 shared_informer.go:320] Caches are synced for disruption
	I0419 21:00:07.853366       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 21:00:07.855905       1 shared_informer.go:320] Caches are synced for HPA
	I0419 21:00:07.858696       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 21:00:07.858842       1 shared_informer.go:320] Caches are synced for expand
	I0419 21:00:07.861117       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 21:00:07.863998       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 21:00:07.885812       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 21:00:07.886129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="169.209µs"
	I0419 21:00:07.919677       1 shared_informer.go:320] Caches are synced for job
	I0419 21:00:07.935727       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 21:00:07.936266       1 shared_informer.go:320] Caches are synced for service account
	I0419 21:00:07.941675       1 shared_informer.go:320] Caches are synced for namespace
	I0419 21:00:07.993904       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 21:00:08.014744       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 21:00:08.033575       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 21:00:08.055742       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 21:00:08.070670       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 21:00:08.101573       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 21:00:08.498309       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 21:00:08.498403       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 21:00:08.512924       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688] <==
	
	
	==> kube-proxy [71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c] <==
	I0419 20:59:56.496357       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:59:56.514816       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0419 20:59:56.596191       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:59:56.596298       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:59:56.596336       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:59:56.608186       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:59:56.608384       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:59:56.608424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:59:56.611834       1 config.go:192] "Starting service config controller"
	I0419 20:59:56.611877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:59:56.611902       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:59:56.611906       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:59:56.612235       1 config.go:319] "Starting node config controller"
	I0419 20:59:56.612272       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:59:56.713729       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:59:56.713859       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:59:56.714435       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0] <==
	I0419 20:59:56.253695       1 serving.go:380] Generated self-signed cert in-memory
	I0419 20:59:57.224389       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 20:59:57.224544       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:59:57.230693       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 20:59:57.231145       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 20:59:57.231269       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 20:59:57.231424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:59:57.233127       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 20:59:57.233323       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 20:59:57.233435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 20:59:57.233464       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 20:59:57.331937       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 20:59:57.334406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 20:59:57.334475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a] <==
	
	
	==> kubelet <==
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826593    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-k8s-certs\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826673    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826736    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa7473271e234f02b080034925c004d9-kubeconfig\") pod \"kube-scheduler-pause-635451\" (UID: \"fa7473271e234f02b080034925c004d9\") " pod="kube-system/kube-scheduler-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826779    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bfed9f7b1b9fc24cfca8d82324ef4c44-etcd-certs\") pod \"etcd-pause-635451\" (UID: \"bfed9f7b1b9fc24cfca8d82324ef4c44\") " pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826833    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/bfed9f7b1b9fc24cfca8d82324ef4c44-etcd-data\") pod \"etcd-pause-635451\" (UID: \"bfed9f7b1b9fc24cfca8d82324ef4c44\") " pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826881    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-ca-certs\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826936    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-k8s-certs\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827031    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-usr-share-ca-certificates\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827109    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-ca-certs\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827193    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-flexvolume-dir\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827314    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-kubeconfig\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.887711    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-635451\" already exists" pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.888764    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-635451\" already exists" pod="kube-system/kube-scheduler-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.889051    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-635451\" already exists" pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.889883    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-635451\" already exists" pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.188235    3589 scope.go:117] "RemoveContainer" containerID="3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.189668    3589 scope.go:117] "RemoveContainer" containerID="69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.380256    3589 apiserver.go:52] "Watching apiserver"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.389940    3589 topology_manager.go:215] "Topology Admit Handler" podUID="d2283d8e-90e8-4216-9469-241c55639a22" podNamespace="kube-system" podName="kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.390401    3589 topology_manager.go:215] "Topology Admit Handler" podUID="8fbe83c2-1b3b-4877-9801-03db25f6f671" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kdzqp"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.412806    3589 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.432708    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2283d8e-90e8-4216-9469-241c55639a22-lib-modules\") pod \"kube-proxy-htrpl\" (UID: \"d2283d8e-90e8-4216-9469-241c55639a22\") " pod="kube-system/kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.433278    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2283d8e-90e8-4216-9469-241c55639a22-xtables-lock\") pod \"kube-proxy-htrpl\" (UID: \"d2283d8e-90e8-4216-9469-241c55639a22\") " pod="kube-system/kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.691349    3589 scope.go:117] "RemoveContainer" containerID="3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.691960    3589 scope.go:117] "RemoveContainer" containerID="05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 21:00:11.592961  421417 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
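The "bufio.Scanner: token too long" failure above is the error Go's bufio.Scanner returns when a single line in the file being read (here lastStart.txt) exceeds the scanner's buffer cap, which defaults to 64 KiB. A hedged sketch, assuming one simply wanted to read such a file with a larger cap — illustrative Go only, not the minikube implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line, raising bufio.Scanner's
// buffer cap from the default 64 KiB so very long log lines do not
// fail with "token too long". Illustrative sketch only.
func readLongLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	lines, err := readLongLines("lastStart.txt") // hypothetical path for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("read", len(lines), "lines")
}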
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-635451 -n pause-635451
helpers_test.go:261: (dbg) Run:  kubectl --context pause-635451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-635451 -n pause-635451
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-635451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-635451 logs -n 25: (1.712719831s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl status containerd            |                           |         |                |                     |                     |
	|         | --all --full --no-pager                |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl cat containerd               |                           |         |                |                     |                     |
	|         | --no-pager                             |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo cat              | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo                  | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo find             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-752991 sudo crio             | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-752991                       | cilium-752991             | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-683947              | running-upgrade-683947    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:57 UTC |
	| start   | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:57 UTC | 19 Apr 24 20:58 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-725675 ssh cat      | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-725675           | force-systemd-flag-725675 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p old-k8s-version-771336              | old-k8s-version-771336    | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --kvm-network=default                  |                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |                |                     |                     |
	|         | --disable-driver-mounts                |                           |         |                |                     |                     |
	|         | --keep-context=false                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:59 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-465658 ssh                | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-465658 -- sudo         | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-465658                 | cert-options-465658       | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 20:58 UTC |
	| start   | -p no-preload-202684                   | no-preload-202684         | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |                |                     |                     |
	|         |  --container-runtime=crio              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	| start   | -p pause-635451                        | pause-635451              | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:58 UTC | 19 Apr 24 21:00 UTC |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-270819           | kubernetes-upgrade-270819 | jenkins | v1.33.0-beta.0 | 19 Apr 24 20:59 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0           |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 20:59:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 20:59:23.900280  421093 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:59:23.900418  421093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:59:23.900426  421093 out.go:304] Setting ErrFile to fd 2...
	I0419 20:59:23.900431  421093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:59:23.900607  421093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:59:23.901211  421093 out.go:298] Setting JSON to false
	I0419 20:59:23.902184  421093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9710,"bootTime":1713550654,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:59:23.902256  421093 start.go:139] virtualization: kvm guest
	I0419 20:59:23.904456  421093 out.go:177] * [kubernetes-upgrade-270819] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:59:23.906010  421093 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:59:23.905955  421093 notify.go:220] Checking for updates...
	I0419 20:59:23.907592  421093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:59:23.909100  421093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:59:23.910655  421093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:59:23.912296  421093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:59:23.913912  421093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:59:23.915900  421093 config.go:182] Loaded profile config "kubernetes-upgrade-270819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:23.916501  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:23.916563  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:23.933132  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0419 20:59:23.933601  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:23.934231  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:23.934249  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:23.934615  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:23.934859  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:23.935156  421093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:59:23.935452  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:23.935487  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:23.951448  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0419 20:59:23.952012  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:23.952513  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:23.952540  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:23.952908  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:23.953096  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:23.991318  421093 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:59:23.992704  421093 start.go:297] selected driver: kvm2
	I0419 20:59:23.992725  421093 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:23.992853  421093 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:59:23.993620  421093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:59:23.993708  421093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 20:59:24.009863  421093 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 20:59:24.010304  421093 cni.go:84] Creating CNI manager for ""
	I0419 20:59:24.010323  421093 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:59:24.010379  421093 start.go:340] cluster config:
	{Name:kubernetes-upgrade-270819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-270819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:24.010511  421093 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 20:59:24.012141  421093 out.go:177] * Starting "kubernetes-upgrade-270819" primary control-plane node in "kubernetes-upgrade-270819" cluster
	I0419 20:59:21.100442  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:21.100940  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find current IP address of domain no-preload-202684 in network mk-no-preload-202684
	I0419 20:59:21.100972  420597 main.go:141] libmachine: (no-preload-202684) DBG | I0419 20:59:21.100883  420840 retry.go:31] will retry after 2.825571039s: waiting for machine to come up
	I0419 20:59:23.930514  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:23.931145  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find current IP address of domain no-preload-202684 in network mk-no-preload-202684
	I0419 20:59:23.931175  420597 main.go:141] libmachine: (no-preload-202684) DBG | I0419 20:59:23.931088  420840 retry.go:31] will retry after 5.224017665s: waiting for machine to come up
	I0419 20:59:24.013490  421093 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:24.013534  421093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 20:59:24.013556  421093 cache.go:56] Caching tarball of preloaded images
	I0419 20:59:24.013654  421093 preload.go:173] Found /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0419 20:59:24.013668  421093 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 20:59:24.013778  421093 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/kubernetes-upgrade-270819/config.json ...
	I0419 20:59:24.014007  421093 start.go:360] acquireMachinesLock for kubernetes-upgrade-270819: {Name:mk8dfdf990ff4929d25c4dd81c09213b6e0a44fd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 20:59:30.681834  420629 start.go:364] duration metric: took 48.049529377s to acquireMachinesLock for "pause-635451"
	I0419 20:59:30.681899  420629 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:59:30.681911  420629 fix.go:54] fixHost starting: 
	I0419 20:59:30.682416  420629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:30.682476  420629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:30.700423  420629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0419 20:59:30.700901  420629 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:30.701510  420629 main.go:141] libmachine: Using API Version  1
	I0419 20:59:30.701541  420629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:30.701940  420629 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:30.702179  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.702341  420629 main.go:141] libmachine: (pause-635451) Calling .GetState
	I0419 20:59:30.703893  420629 fix.go:112] recreateIfNeeded on pause-635451: state=Running err=<nil>
	W0419 20:59:30.703916  420629 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:59:30.705772  420629 out.go:177] * Updating the running kvm2 "pause-635451" VM ...
	I0419 20:59:29.158216  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.158762  420597 main.go:141] libmachine: (no-preload-202684) Found IP for machine: 192.168.61.149
	I0419 20:59:29.158793  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has current primary IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.158803  420597 main.go:141] libmachine: (no-preload-202684) Reserving static IP address...
	I0419 20:59:29.159265  420597 main.go:141] libmachine: (no-preload-202684) DBG | unable to find host DHCP lease matching {name: "no-preload-202684", mac: "52:54:00:e0:be:7b", ip: "192.168.61.149"} in network mk-no-preload-202684
	I0419 20:59:29.237448  420597 main.go:141] libmachine: (no-preload-202684) DBG | Getting to WaitForSSH function...
	I0419 20:59:29.237493  420597 main.go:141] libmachine: (no-preload-202684) Reserved static IP address: 192.168.61.149
	I0419 20:59:29.237527  420597 main.go:141] libmachine: (no-preload-202684) Waiting for SSH to be available...
	I0419 20:59:29.240102  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.240460  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.240498  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.240588  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using SSH client type: external
	I0419 20:59:29.240613  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using SSH private key: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa (-rw-------)
	I0419 20:59:29.240666  420597 main.go:141] libmachine: (no-preload-202684) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0419 20:59:29.240685  420597 main.go:141] libmachine: (no-preload-202684) DBG | About to run SSH command:
	I0419 20:59:29.240715  420597 main.go:141] libmachine: (no-preload-202684) DBG | exit 0
	I0419 20:59:29.369280  420597 main.go:141] libmachine: (no-preload-202684) DBG | SSH cmd err, output: <nil>: 
	I0419 20:59:29.369542  420597 main.go:141] libmachine: (no-preload-202684) KVM machine creation complete!
	I0419 20:59:29.369955  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetConfigRaw
	I0419 20:59:29.370566  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:29.370799  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:29.371057  420597 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0419 20:59:29.371078  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetState
	I0419 20:59:29.372511  420597 main.go:141] libmachine: Detecting operating system of created instance...
	I0419 20:59:29.372547  420597 main.go:141] libmachine: Waiting for SSH to be available...
	I0419 20:59:29.372554  420597 main.go:141] libmachine: Getting to WaitForSSH function...
	I0419 20:59:29.372563  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.374911  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.375306  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.375333  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.375487  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.375681  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.375846  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.376010  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.376213  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.376404  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.376414  420597 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0419 20:59:29.483978  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:29.484005  420597 main.go:141] libmachine: Detecting the provisioner...
	I0419 20:59:29.484013  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.486852  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.487256  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.487282  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.487414  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.487635  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.487792  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.487975  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.488183  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.488353  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.488364  420597 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0419 20:59:29.597614  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0419 20:59:29.597736  420597 main.go:141] libmachine: found compatible host: buildroot
	I0419 20:59:29.597751  420597 main.go:141] libmachine: Provisioning with buildroot...
	I0419 20:59:29.597762  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.598068  420597 buildroot.go:166] provisioning hostname "no-preload-202684"
	I0419 20:59:29.598115  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.598327  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.601018  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.601410  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.601442  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.601616  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.601817  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.601950  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.602073  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.602242  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.602430  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.602443  420597 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-202684 && echo "no-preload-202684" | sudo tee /etc/hostname
	I0419 20:59:29.728251  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-202684
	
	I0419 20:59:29.728285  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.731486  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.731930  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.731981  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.732182  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.732393  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.732598  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.732807  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.733019  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:29.733223  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:29.733247  420597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-202684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-202684/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-202684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:29.850475  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:29.850507  420597 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:29.850527  420597 buildroot.go:174] setting up certificates
	I0419 20:59:29.850540  420597 provision.go:84] configureAuth start
	I0419 20:59:29.850550  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetMachineName
	I0419 20:59:29.850862  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:29.853793  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.854161  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.854196  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.854340  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.856610  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.857017  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.857066  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.857235  420597 provision.go:143] copyHostCerts
	I0419 20:59:29.857300  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:29.857316  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:29.857396  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:29.857521  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:29.857533  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:29.857556  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:29.857632  420597 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:29.857641  420597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:29.857660  420597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:29.857705  420597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.no-preload-202684 san=[127.0.0.1 192.168.61.149 localhost minikube no-preload-202684]
	I0419 20:59:29.946744  420597 provision.go:177] copyRemoteCerts
	I0419 20:59:29.946817  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:29.946855  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:29.949799  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.950143  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:29.950188  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:29.950379  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:29.950576  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:29.950733  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:29.950882  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.036140  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:30.065082  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0419 20:59:30.093417  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:59:30.121282  420597 provision.go:87] duration metric: took 270.729819ms to configureAuth
	I0419 20:59:30.121311  420597 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:30.121517  420597 config.go:182] Loaded profile config "no-preload-202684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:30.121637  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.124371  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.124684  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.124713  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.124900  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.125117  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.125320  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.125483  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.125664  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.125857  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:30.125877  420597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:30.427084  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:59:30.427132  420597 main.go:141] libmachine: Checking connection to Docker...
	I0419 20:59:30.427144  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetURL
	I0419 20:59:30.428546  420597 main.go:141] libmachine: (no-preload-202684) DBG | Using libvirt version 6000000
	I0419 20:59:30.431313  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.431677  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.431716  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.431861  420597 main.go:141] libmachine: Docker is up and running!
	I0419 20:59:30.431881  420597 main.go:141] libmachine: Reticulating splines...
	I0419 20:59:30.431889  420597 client.go:171] duration metric: took 25.148659401s to LocalClient.Create
	I0419 20:59:30.431929  420597 start.go:167] duration metric: took 25.148752115s to libmachine.API.Create "no-preload-202684"
	I0419 20:59:30.431950  420597 start.go:293] postStartSetup for "no-preload-202684" (driver="kvm2")
	I0419 20:59:30.431966  420597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:59:30.431991  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.432282  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:59:30.432317  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.434734  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.435131  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.435160  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.435320  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.435552  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.435676  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.435848  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.519801  420597 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:59:30.525265  420597 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:59:30.525296  420597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:59:30.525369  420597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:59:30.525456  420597 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:59:30.525586  420597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:59:30.536486  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:30.563656  420597 start.go:296] duration metric: took 131.688164ms for postStartSetup
	I0419 20:59:30.563721  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetConfigRaw
	I0419 20:59:30.564460  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:30.567288  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.567635  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.567666  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.567986  420597 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/no-preload-202684/config.json ...
	I0419 20:59:30.568231  420597 start.go:128] duration metric: took 25.310088259s to createHost
	I0419 20:59:30.568266  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.570764  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.571139  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.571164  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.571280  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.571452  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.571627  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.571772  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.571940  420597 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.572100  420597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.149 22 <nil> <nil>}
	I0419 20:59:30.572110  420597 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:59:30.681622  420597 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560370.653253483
	
	I0419 20:59:30.681655  420597 fix.go:216] guest clock: 1713560370.653253483
	I0419 20:59:30.681666  420597 fix.go:229] Guest: 2024-04-19 20:59:30.653253483 +0000 UTC Remote: 2024-04-19 20:59:30.568250409 +0000 UTC m=+49.743444388 (delta=85.003074ms)
	I0419 20:59:30.681694  420597 fix.go:200] guest clock delta is within tolerance: 85.003074ms
	I0419 20:59:30.681701  420597 start.go:83] releasing machines lock for "no-preload-202684", held for 25.42376948s
	I0419 20:59:30.681733  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.682037  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:30.685108  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.685476  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.685507  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.685843  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686452  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686662  420597 main.go:141] libmachine: (no-preload-202684) Calling .DriverName
	I0419 20:59:30.686757  420597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:59:30.686802  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.686933  420597 ssh_runner.go:195] Run: cat /version.json
	I0419 20:59:30.686961  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHHostname
	I0419 20:59:30.689936  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.689962  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690359  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.690399  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690430  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:30.690465  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:30.690582  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.690709  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHPort
	I0419 20:59:30.690787  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.690879  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHKeyPath
	I0419 20:59:30.690946  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.691046  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetSSHUsername
	I0419 20:59:30.691129  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.691215  420597 sshutil.go:53] new ssh client: &{IP:192.168.61.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/no-preload-202684/id_rsa Username:docker}
	I0419 20:59:30.821054  420597 ssh_runner.go:195] Run: systemctl --version
	I0419 20:59:30.828007  420597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:59:30.997248  420597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:59:31.004724  420597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:59:31.004835  420597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:59:31.025692  420597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0419 20:59:31.025721  420597 start.go:494] detecting cgroup driver to use...
	I0419 20:59:31.025797  420597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:59:31.045483  420597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:59:31.060514  420597 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:59:31.060603  420597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:59:31.075492  420597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:59:31.089977  420597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:59:31.212787  420597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:59:31.379339  420597 docker.go:233] disabling docker service ...
	I0419 20:59:31.379434  420597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:59:31.397087  420597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:59:31.414396  420597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:59:31.560930  420597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:59:31.705881  420597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:59:31.723109  420597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:59:31.746490  420597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:59:31.746570  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.758278  420597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:59:31.758363  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.771692  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.783633  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.796030  420597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:59:31.808595  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.822123  420597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.840497  420597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:31.851808  420597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:59:31.861994  420597 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0419 20:59:31.862062  420597 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0419 20:59:31.876810  420597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:59:31.888895  420597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:32.012433  420597 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:59:32.154663  420597 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:59:32.154756  420597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:59:32.159531  420597 start.go:562] Will wait 60s for crictl version
	I0419 20:59:32.159618  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.163474  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:59:32.202018  420597 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:59:32.202115  420597 ssh_runner.go:195] Run: crio --version
	I0419 20:59:32.235615  420597 ssh_runner.go:195] Run: crio --version
	I0419 20:59:32.271806  420597 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0419 20:59:30.707369  420629 machine.go:94] provisionDockerMachine start ...
	I0419 20:59:30.707397  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:30.707617  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.710554  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.710959  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.710987  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.711081  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.711269  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711460  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.711602  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.711826  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.712100  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.712118  420629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:59:30.834113  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.834151  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834467  420629 buildroot.go:166] provisioning hostname "pause-635451"
	I0419 20:59:30.834501  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:30.834722  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.837964  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838419  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.838469  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.838687  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.838889  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839143  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.839303  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.839515  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.839734  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.839750  420629 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-635451 && echo "pause-635451" | sudo tee /etc/hostname
	I0419 20:59:30.973130  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-635451
	
	I0419 20:59:30.973172  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:30.976399  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.976936  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:30.976970  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:30.977245  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:30.977516  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977696  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:30.977895  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:30.978164  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:30.978377  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:30.978404  420629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-635451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-635451/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-635451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:31.095515  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:31.095549  420629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:31.095576  420629 buildroot.go:174] setting up certificates
	I0419 20:59:31.095590  420629 provision.go:84] configureAuth start
	I0419 20:59:31.095604  420629 main.go:141] libmachine: (pause-635451) Calling .GetMachineName
	I0419 20:59:31.095912  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:31.098791  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099199  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.099232  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.099354  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.101727  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102098  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.102137  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.102268  420629 provision.go:143] copyHostCerts
	I0419 20:59:31.102326  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:31.102336  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:31.102385  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:31.102481  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:31.102490  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:31.102509  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:31.102569  420629 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:31.102580  420629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:31.102596  420629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:31.102652  420629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.pause-635451 san=[127.0.0.1 192.168.39.194 localhost minikube pause-635451]
	I0419 20:59:31.284651  420629 provision.go:177] copyRemoteCerts
	I0419 20:59:31.284720  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:31.284747  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.287681  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288175  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.288289  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.288508  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.288743  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.288920  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.289105  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:31.379802  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:31.414776  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0419 20:59:31.453224  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 20:59:31.485145  420629 provision.go:87] duration metric: took 389.537969ms to configureAuth
	I0419 20:59:31.485184  420629 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:31.485494  420629 config.go:182] Loaded profile config "pause-635451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:31.485605  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:31.488458  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.488839  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:31.488876  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:31.489023  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:31.489227  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489434  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:31.489595  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:31.489836  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:31.490045  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:31.490061  420629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:32.273076  420597 main.go:141] libmachine: (no-preload-202684) Calling .GetIP
	I0419 20:59:32.276013  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:32.276388  420597 main.go:141] libmachine: (no-preload-202684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:be:7b", ip: ""} in network mk-no-preload-202684: {Iface:virbr2 ExpiryTime:2024-04-19 21:59:21 +0000 UTC Type:0 Mac:52:54:00:e0:be:7b Iaid: IPaddr:192.168.61.149 Prefix:24 Hostname:no-preload-202684 Clientid:01:52:54:00:e0:be:7b}
	I0419 20:59:32.276419  420597 main.go:141] libmachine: (no-preload-202684) DBG | domain no-preload-202684 has defined IP address 192.168.61.149 and MAC address 52:54:00:e0:be:7b in network mk-no-preload-202684
	I0419 20:59:32.276620  420597 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0419 20:59:32.280762  420597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 20:59:32.293872  420597 kubeadm.go:877] updating cluster {Name:no-preload-202684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-202684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.149 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:59:32.293991  420597 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:32.294037  420597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:32.328536  420597 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0419 20:59:32.328564  420597 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 20:59:32.328651  420597 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:32.328658  420597 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.328669  420597 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.328691  420597 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.328717  420597 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.328726  420597 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0419 20:59:32.328738  420597 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.328704  420597 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.330187  420597 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.330197  420597 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.330187  420597 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.330188  420597 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.330188  420597 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.330199  420597 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:32.330247  420597 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.330249  420597 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0419 20:59:32.498381  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.518860  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0419 20:59:32.528800  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.540908  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.541702  420597 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0419 20:59:32.541753  420597 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.541798  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.549648  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.568736  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.596173  420597 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0419 20:59:32.596240  420597 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0419 20:59:32.596297  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.614844  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.642471  420597 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0419 20:59:32.642533  420597 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.642585  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.685419  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0419 20:59:32.685455  420597 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0419 20:59:32.685501  420597 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.685547  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.685550  420597 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0419 20:59:32.685948  420597 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0419 20:59:32.686024  420597 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.686087  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.686140  420597 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.686187  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.686146  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0419 20:59:32.727542  420597 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0419 20:59:32.727654  420597 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.727697  420597 ssh_runner.go:195] Run: which crictl
	I0419 20:59:32.727603  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0419 20:59:32.752137  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0419 20:59:32.752220  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0419 20:59:32.752327  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:32.791617  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0419 20:59:32.791676  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0419 20:59:32.791694  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0419 20:59:32.791725  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0419 20:59:32.791699  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 20:59:32.837475  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0419 20:59:32.837537  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0419 20:59:32.837587  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:32.837603  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0419 20:59:32.837630  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:32.837647  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
	I0419 20:59:32.960917  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0419 20:59:32.960944  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0419 20:59:32.960978  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0419 20:59:32.961023  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0419 20:59:32.961045  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:32.961021  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0419 20:59:32.961082  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0419 20:59:32.961098  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0419 20:59:32.961105  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:32.961101  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0419 20:59:32.961045  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:32.961112  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0419 20:59:33.046382  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0419 20:59:33.046433  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0419 20:59:33.046534  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0419 20:59:33.046553  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0419 20:59:33.046595  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0419 20:59:33.046614  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0419 20:59:33.079017  420597 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0419 20:59:33.079106  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0419 20:59:33.128757  420597 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:33.985878  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0419 20:59:33.985932  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:33.985989  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0419 20:59:33.986002  420597 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0419 20:59:33.986070  420597 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:33.986123  420597 ssh_runner.go:195] Run: which crictl
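The cache_images lines above show minikube's image-sync loop for a CRI-O node: probe for the cached tarball under /var/lib/minikube/images with stat, scp it over from the host-side cache when the probe fails, load it with podman, and drop the stale tag with crictl. A minimal shell sketch of that loop follows; the pause_3.9 tarball name and the on-node path are taken from the log, while the NODE placeholder and the generalized $HOME/.minikube cache path are illustrative only, not part of the test run.

  # Sketch of the transfer-and-load pattern seen above (illustrative).
  NODE=<node-ip>                                                   # placeholder for the VM's SSH address
  IMG_TAR=/var/lib/minikube/images/pause_3.9
  CACHE_TAR=$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
  if ! ssh docker@"$NODE" stat "$IMG_TAR" >/dev/null 2>&1; then
      scp "$CACHE_TAR" docker@"$NODE":"$IMG_TAR"                   # copy the cached tarball onto the node
      ssh docker@"$NODE" sudo podman load -i "$IMG_TAR"            # load it into the CRI-O/podman image store
  fi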
	I0419 20:59:37.835048  421093 start.go:364] duration metric: took 13.820993299s to acquireMachinesLock for "kubernetes-upgrade-270819"
	I0419 20:59:37.835121  421093 start.go:96] Skipping create...Using existing machine configuration
	I0419 20:59:37.835134  421093 fix.go:54] fixHost starting: 
	I0419 20:59:37.835631  421093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:59:37.835683  421093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:59:37.853959  421093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0419 20:59:37.854414  421093 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:59:37.854915  421093 main.go:141] libmachine: Using API Version  1
	I0419 20:59:37.854940  421093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:59:37.855359  421093 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:59:37.855567  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:37.855709  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetState
	I0419 20:59:37.863039  421093 fix.go:112] recreateIfNeeded on kubernetes-upgrade-270819: state=Running err=<nil>
	W0419 20:59:37.863332  421093 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 20:59:37.865178  421093 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-270819" VM ...
	I0419 20:59:37.866575  421093 machine.go:94] provisionDockerMachine start ...
	I0419 20:59:37.866608  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .DriverName
	I0419 20:59:37.866842  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:37.870718  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:37.870924  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:37.870954  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:37.871297  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:37.871487  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:37.871670  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:37.871912  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:37.872107  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:37.872357  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:37.872374  421093 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 20:59:37.997044  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-270819
	
	I0419 20:59:37.997082  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:37.997407  421093 buildroot.go:166] provisioning hostname "kubernetes-upgrade-270819"
	I0419 20:59:37.997439  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:37.997626  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.001261  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.001742  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.001774  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.002004  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.002249  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.002382  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.002479  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.002624  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.002854  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.002874  421093 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-270819 && echo "kubernetes-upgrade-270819" | sudo tee /etc/hostname
	I0419 20:59:38.143534  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-270819
	
	I0419 20:59:38.143572  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.146776  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.147136  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.147173  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.147382  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.147584  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.147774  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.147922  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.148136  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.148350  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.148370  421093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-270819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-270819/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-270819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 20:59:38.259190  421093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 20:59:38.259226  421093 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18669-366597/.minikube CaCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18669-366597/.minikube}
	I0419 20:59:38.259250  421093 buildroot.go:174] setting up certificates
	I0419 20:59:38.259260  421093 provision.go:84] configureAuth start
	I0419 20:59:38.259273  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetMachineName
	I0419 20:59:38.259608  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetIP
	I0419 20:59:38.263019  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.263462  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.263492  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.263650  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.266661  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.267039  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.267082  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.267363  421093 provision.go:143] copyHostCerts
	I0419 20:59:38.267425  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem, removing ...
	I0419 20:59:38.267435  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem
	I0419 20:59:38.267476  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/ca.pem (1078 bytes)
	I0419 20:59:38.267571  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem, removing ...
	I0419 20:59:38.267579  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem
	I0419 20:59:38.267600  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/cert.pem (1123 bytes)
	I0419 20:59:38.267686  421093 exec_runner.go:144] found /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem, removing ...
	I0419 20:59:38.267697  421093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem
	I0419 20:59:38.267723  421093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18669-366597/.minikube/key.pem (1679 bytes)
	I0419 20:59:38.267815  421093 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-270819 san=[127.0.0.1 192.168.50.60 kubernetes-upgrade-270819 localhost minikube]
	I0419 20:59:38.404665  421093 provision.go:177] copyRemoteCerts
	I0419 20:59:38.404736  421093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 20:59:38.404766  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.408075  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.408495  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.408542  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.408749  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.408997  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.409185  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.409326  421093 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/kubernetes-upgrade-270819/id_rsa Username:docker}
	I0419 20:59:38.497449  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0419 20:59:38.537933  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 20:59:38.580349  421093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0419 20:59:38.615657  421093 provision.go:87] duration metric: took 356.381354ms to configureAuth
	I0419 20:59:38.615698  421093 buildroot.go:189] setting minikube options for container-runtime
	I0419 20:59:38.615897  421093 config.go:182] Loaded profile config "kubernetes-upgrade-270819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:59:38.615992  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHHostname
	I0419 20:59:38.619079  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.619558  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:4f:ac", ip: ""} in network mk-kubernetes-upgrade-270819: {Iface:virbr4 ExpiryTime:2024-04-19 21:58:57 +0000 UTC Type:0 Mac:52:54:00:0f:4f:ac Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:kubernetes-upgrade-270819 Clientid:01:52:54:00:0f:4f:ac}
	I0419 20:59:38.619592  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) DBG | domain kubernetes-upgrade-270819 has defined IP address 192.168.50.60 and MAC address 52:54:00:0f:4f:ac in network mk-kubernetes-upgrade-270819
	I0419 20:59:38.619825  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHPort
	I0419 20:59:38.620072  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.620290  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHKeyPath
	I0419 20:59:38.620471  421093 main.go:141] libmachine: (kubernetes-upgrade-270819) Calling .GetSSHUsername
	I0419 20:59:38.620729  421093 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:38.620985  421093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0419 20:59:38.621011  421093 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0419 20:59:36.073173  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.087160464s)
	I0419 20:59:36.073209  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0419 20:59:36.073174  420597 ssh_runner.go:235] Completed: which crictl: (2.087022041s)
	I0419 20:59:36.073238  420597 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:36.073293  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0419 20:59:36.073298  420597 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 20:59:36.119621  420597 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 20:59:36.119733  420597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0419 20:59:38.966415  420597 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.846650235s)
	I0419 20:59:38.966468  420597 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0419 20:59:38.966521  420597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0419 20:59:38.966547  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.893224823s)
	I0419 20:59:38.966574  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0419 20:59:38.966605  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:38.966663  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0419 20:59:37.569036  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0419 20:59:37.569065  420629 machine.go:97] duration metric: took 6.861676347s to provisionDockerMachine
	I0419 20:59:37.569080  420629 start.go:293] postStartSetup for "pause-635451" (driver="kvm2")
	I0419 20:59:37.569094  420629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 20:59:37.569116  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.569460  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 20:59:37.569494  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.572897  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573277  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.573315  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.573533  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.573764  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.573958  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.574113  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.660176  420629 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 20:59:37.666228  420629 info.go:137] Remote host: Buildroot 2023.02.9
	I0419 20:59:37.666266  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/addons for local assets ...
	I0419 20:59:37.666349  420629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18669-366597/.minikube/files for local assets ...
	I0419 20:59:37.666478  420629 filesync.go:149] local asset: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem -> 3739982.pem in /etc/ssl/certs
	I0419 20:59:37.666642  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 20:59:37.676576  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:37.708706  420629 start.go:296] duration metric: took 139.607002ms for postStartSetup
	I0419 20:59:37.708760  420629 fix.go:56] duration metric: took 7.02684919s for fixHost
	I0419 20:59:37.708788  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.712071  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712499  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.712529  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.712815  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.713047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713204  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.713363  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.713652  420629 main.go:141] libmachine: Using SSH client type: native
	I0419 20:59:37.713867  420629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0419 20:59:37.713880  420629 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0419 20:59:37.834856  420629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713560377.829171571
	
	I0419 20:59:37.834879  420629 fix.go:216] guest clock: 1713560377.829171571
	I0419 20:59:37.834901  420629 fix.go:229] Guest: 2024-04-19 20:59:37.829171571 +0000 UTC Remote: 2024-04-19 20:59:37.708765693 +0000 UTC m=+55.281230832 (delta=120.405878ms)
	I0419 20:59:37.834942  420629 fix.go:200] guest clock delta is within tolerance: 120.405878ms
	I0419 20:59:37.834949  420629 start.go:83] releasing machines lock for "pause-635451", held for 7.153078334s
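The guest-clock check above compares the VM's date +%s.%N output against the host's time and accepts the roughly 120ms delta as within tolerance (the %!s(MISSING) noise in the logged command appears to be the logger mangling literal % verbs; the command being run is date +%s.%N). A hand-rolled equivalent, with the node address as an illustrative placeholder, would be:

  # Illustrative drift check, mirroring the guest-clock comparison above.
  guest=$(ssh docker@<node-ip> 'date +%s.%N')
  host=$(date +%s.%N)
  awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3f s\n", h - g }'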
	I0419 20:59:37.834980  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.835295  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:37.838347  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.838813  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.838856  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.839035  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839677  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839881  420629 main.go:141] libmachine: (pause-635451) Calling .DriverName
	I0419 20:59:37.839997  420629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 20:59:37.840047  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.840159  420629 ssh_runner.go:195] Run: cat /version.json
	I0419 20:59:37.840190  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHHostname
	I0419 20:59:37.842981  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843313  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843412  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843432  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843585  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843744  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:37.843770  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.843816  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:37.843892  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHPort
	I0419 20:59:37.843961  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844030  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHKeyPath
	I0419 20:59:37.844258  420629 main.go:141] libmachine: (pause-635451) Calling .GetSSHUsername
	I0419 20:59:37.844253  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.844411  420629 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/pause-635451/id_rsa Username:docker}
	I0419 20:59:37.968233  420629 ssh_runner.go:195] Run: systemctl --version
	I0419 20:59:37.976324  420629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0419 20:59:38.150175  420629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 20:59:38.158300  420629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 20:59:38.158367  420629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0419 20:59:38.169280  420629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0419 20:59:38.169311  420629 start.go:494] detecting cgroup driver to use...
	I0419 20:59:38.169396  420629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 20:59:38.187789  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 20:59:38.207289  420629 docker.go:217] disabling cri-docker service (if available) ...
	I0419 20:59:38.207348  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0419 20:59:38.224653  420629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0419 20:59:38.240107  420629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0419 20:59:38.440705  420629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0419 20:59:38.706842  420629 docker.go:233] disabling docker service ...
	I0419 20:59:38.706930  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0419 20:59:38.861517  420629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0419 20:59:38.956442  420629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0419 20:59:39.280078  420629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0419 20:59:39.664803  420629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0419 20:59:39.718459  420629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 20:59:39.788624  420629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0419 20:59:39.788711  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.812125  420629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0419 20:59:39.812221  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.827569  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.846551  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.864973  420629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 20:59:39.893015  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.911102  420629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.929726  420629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0419 20:59:39.944963  420629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 20:59:39.961238  420629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 20:59:39.976051  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:40.144723  420629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0419 20:59:40.736956  420629 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0419 20:59:40.737087  420629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0419 20:59:40.744406  420629 start.go:562] Will wait 60s for crictl version
	I0419 20:59:40.744476  420629 ssh_runner.go:195] Run: which crictl
	I0419 20:59:40.749493  420629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 20:59:40.808992  420629 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0419 20:59:40.809069  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:40.845299  420629 ssh_runner.go:195] Run: crio --version
	I0419 20:59:41.074474  420629 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
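Before this point the pause-635451 run rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl), points crictl at the CRI-O socket via /etc/crictl.yaml, and then reloads systemd and restarts crio. The sed expressions below are lifted directly from the Run: lines above and consolidated into a single sketch.

  # Consolidated CRI-O reconfiguration, as performed step by step in the log.
  CONF=/etc/crio/crio.conf.d/02-crio.conf
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale setting
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # then pin conmon to the pod cgroup
  sudo systemctl daemon-reload && sudo systemctl restart crio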
	I0419 20:59:37.319116  420033 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0419 20:59:37.327299  420033 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:59:37.327556  420033 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
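The 420033 run above is stuck in kubeadm's kubelet-check because the kubelet's health endpoint on port 10248 is refusing connections. The probe kubeadm performs is shown in the log; the commands below are an illustrative way to reproduce it by hand and were not taken from this run.

  # Manual reproduction of the failing health probe (illustrative).
  curl -sSL http://localhost:10248/healthz               # a healthy kubelet answers "ok"
  sudo systemctl status kubelet --no-pager                # is the unit active at all?
  sudo journalctl -u kubelet --no-pager | tail -n 50      # recent kubelet errors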
	I0419 20:59:41.075946  420629 main.go:141] libmachine: (pause-635451) Calling .GetIP
	I0419 20:59:41.078826  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079218  420629 main.go:141] libmachine: (pause-635451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:f1:ab", ip: ""} in network mk-pause-635451: {Iface:virbr1 ExpiryTime:2024-04-19 21:57:17 +0000 UTC Type:0 Mac:52:54:00:34:f1:ab Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:pause-635451 Clientid:01:52:54:00:34:f1:ab}
	I0419 20:59:41.079239  420629 main.go:141] libmachine: (pause-635451) DBG | domain pause-635451 has defined IP address 192.168.39.194 and MAC address 52:54:00:34:f1:ab in network mk-pause-635451
	I0419 20:59:41.079530  420629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0419 20:59:41.084700  420629 kubeadm.go:877] updating cluster {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0419 20:59:41.084849  420629 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 20:59:41.084908  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.132194  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.132218  420629 crio.go:433] Images already preloaded, skipping extraction
	I0419 20:59:41.132265  420629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0419 20:59:41.168982  420629 crio.go:514] all images are preloaded for cri-o runtime.
	I0419 20:59:41.169013  420629 cache_images.go:84] Images are preloaded, skipping loading
	I0419 20:59:41.169022  420629 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0419 20:59:41.169132  420629 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-635451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 20:59:41.169195  420629 ssh_runner.go:195] Run: crio config
	I0419 20:59:41.224802  420629 cni.go:84] Creating CNI manager for ""
	I0419 20:59:41.224834  420629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 20:59:41.224861  420629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 20:59:41.224889  420629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-635451 NodeName:pause-635451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 20:59:41.225088  420629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-635451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
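The block above is the full kubeadm configuration minikube renders for pause-635451 (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file); the log then copies it to /var/tmp/minikube/kubeadm.yaml.new. A config like this can also be sanity-checked offline with a dry run; the command below is illustrative and was not executed by the test.

  # Illustrative dry-run validation of the rendered kubeadm config.
  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run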
	
	I0419 20:59:41.225164  420629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0419 20:59:41.236438  420629 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 20:59:41.236515  420629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 20:59:41.248119  420629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0419 20:59:41.266196  420629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 20:59:41.283660  420629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0419 20:59:41.301075  420629 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0419 20:59:41.305465  420629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 20:59:41.441651  420629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 20:59:41.459496  420629 certs.go:68] Setting up /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451 for IP: 192.168.39.194
	I0419 20:59:41.459527  420629 certs.go:194] generating shared ca certs ...
	I0419 20:59:41.459557  420629 certs.go:226] acquiring lock for ca certs: {Name:mk54b5c924111f3ab2fd67ac9f06ff07ececabff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 20:59:41.459737  420629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key
	I0419 20:59:41.459797  420629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key
	I0419 20:59:41.459811  420629 certs.go:256] generating profile certs ...
	I0419 20:59:41.459920  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/client.key
	I0419 20:59:41.459999  420629 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key.3d8dbd07
	I0419 20:59:41.460048  420629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key
	I0419 20:59:41.460206  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem (1338 bytes)
	W0419 20:59:41.460248  420629 certs.go:480] ignoring /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998_empty.pem, impossibly tiny 0 bytes
	I0419 20:59:41.460262  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 20:59:41.460296  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/ca.pem (1078 bytes)
	I0419 20:59:41.460323  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/cert.pem (1123 bytes)
	I0419 20:59:41.460413  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/certs/key.pem (1679 bytes)
	I0419 20:59:41.460486  420629 certs.go:484] found cert: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem (1708 bytes)
	I0419 20:59:41.461374  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 20:59:41.486107  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0419 20:59:41.511022  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 20:59:41.536821  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0419 20:59:41.562123  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0419 20:59:41.660383  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 20:59:41.835053  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 20:59:41.975418  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/pause-635451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 20:59:42.047847  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 20:59:42.117661  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/certs/373998.pem --> /usr/share/ca-certificates/373998.pem (1338 bytes)
	I0419 20:59:42.151760  420629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/ssl/certs/3739982.pem --> /usr/share/ca-certificates/3739982.pem (1708 bytes)
	I0419 20:59:42.202327  420629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 20:59:42.222270  420629 ssh_runner.go:195] Run: openssl version
	I0419 20:59:42.228883  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 20:59:42.239871  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246035  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.246103  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 20:59:42.260668  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 20:59:42.288295  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/373998.pem && ln -fs /usr/share/ca-certificates/373998.pem /etc/ssl/certs/373998.pem"
	I0419 20:59:42.303259  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310347  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:59 /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.310421  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/373998.pem
	I0419 20:59:42.317695  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/373998.pem /etc/ssl/certs/51391683.0"
	I0419 20:59:42.334472  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3739982.pem && ln -fs /usr/share/ca-certificates/3739982.pem /etc/ssl/certs/3739982.pem"
	I0419 20:59:42.352858  420629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366362  420629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:59 /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.366436  420629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3739982.pem
	I0419 20:59:42.375115  420629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3739982.pem /etc/ssl/certs/3ec20f2e.0"
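	[editor's note] The three "openssl x509 -hash" / "ln -fs" pairs above install each CA into the system trust store under its OpenSSL subject-hash name. A minimal shell sketch of that step, using the minikubeCA.pem path and the b5213941 hash that appear in this log (sketch only, not minikube's actual code path):

	#!/usr/bin/env bash
	# Derive the <hash>.0 symlink OpenSSL uses to look up a CA, mirroring the logged commands.
	set -euo pipefail
	CERT=/usr/share/ca-certificates/minikubeCA.pem      # path as shown in the log
	HASH=$(openssl x509 -hash -noout -in "$CERT")       # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"      # OpenSSL resolves CAs by <subject-hash>.0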
	I0419 20:59:42.397057  420629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 20:59:42.402305  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 20:59:42.408904  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 20:59:42.415914  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 20:59:42.422919  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 20:59:42.429906  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 20:59:42.438783  420629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
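	[editor's note] The "-checkend 86400" probes above only assert that each existing certificate remains valid for at least another 24 hours (86400 seconds). A hedged sketch of the same check run by hand, using one of the cert paths named in the log:

	#!/usr/bin/env bash
	# "openssl x509 -checkend N" exits 0 if the cert is still valid N seconds from now, non-zero otherwise.
	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt   # path as shown in the log
	if openssl x509 -noout -in "$CERT" -checkend 86400; then
	    echo "certificate valid for at least another 24h"
	else
	    echo "certificate expires within 24h"
	fi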
	I0419 20:59:42.445427  420629 kubeadm.go:391] StartCluster: {Name:pause-635451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-635451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:59:42.445588  420629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0419 20:59:42.445650  420629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0419 20:59:42.485993  420629 cri.go:89] found id: "05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc"
	I0419 20:59:42.486023  420629 cri.go:89] found id: "3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f"
	I0419 20:59:42.486029  420629 cri.go:89] found id: "3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688"
	I0419 20:59:42.486034  420629 cri.go:89] found id: "966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c"
	I0419 20:59:42.486040  420629 cri.go:89] found id: "69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a"
	I0419 20:59:42.486044  420629 cri.go:89] found id: "4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2"
	I0419 20:59:42.486048  420629 cri.go:89] found id: "2b06ac09466c5939151d2c0f4169e8cb738cd1dd809ca6319d9e102e97a5c12a"
	I0419 20:59:42.486052  420629 cri.go:89] found id: ""
	I0419 20:59:42.486105  420629 ssh_runner.go:195] Run: sudo runc list -f json
	I0419 20:59:41.150278  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.183579353s)
	I0419 20:59:41.150316  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0419 20:59:41.150342  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:41.150402  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0419 20:59:43.316013  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.165576886s)
	I0419 20:59:43.316078  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0419 20:59:43.316117  420597 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:43.316173  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0419 20:59:45.677302  420597 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.361098816s)
	I0419 20:59:45.677334  420597 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18669-366597/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0419 20:59:45.677371  420597 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:45.677448  420597 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0419 20:59:42.328269  420033 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0419 20:59:42.328549  420033 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
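	[editor's note] The failing kubelet check above is a plain HTTP GET against the kubelet's healthz endpoint on port 10248; "connection refused" means nothing is listening, i.e. the kubelet process is not (yet) up. A sketch of the equivalent manual probe on the node (assumes the kubelet runs as a systemd unit named "kubelet", which is the usual minikube setup):

	#!/usr/bin/env bash
	# Reproduce kubeadm's kubelet health probe; on failure, show recent kubelet logs.
	curl -sSL --max-time 5 http://localhost:10248/healthz && echo " (kubelet healthy)" || \
	    sudo journalctl -u kubelet --no-pager -n 50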
	
	
	==> CRI-O <==
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.331781590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=973c72f6-1f63-4282-942f-dab3af396b37 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.333591923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=678620db-bf27-4c56-8c29-af436938c013 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.333952420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560414333931168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=678620db-bf27-4c56-8c29-af436938c013 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.334766030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b01bcb4d-82df-4e6b-b37f-8fe4a60737a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.334824565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b01bcb4d-82df-4e6b-b37f-8fe4a60737a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.335094538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b01bcb4d-82df-4e6b-b37f-8fe4a60737a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.386593313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a99fe56-1fd5-4c4d-8a33-8c31d85b268e name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.386670354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a99fe56-1fd5-4c4d-8a33-8c31d85b268e name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.388597661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85365d9d-a2e1-450c-a929-2a3adc9e0dc2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.388963024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560414388939251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85365d9d-a2e1-450c-a929-2a3adc9e0dc2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.389825830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=191641fb-d97c-44e9-93db-73c434c9bb52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.389884919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=191641fb-d97c-44e9-93db-73c434c9bb52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.390140350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=191641fb-d97c-44e9-93db-73c434c9bb52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.437186159Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=216dd351-15dc-4e6a-9e33-7127224f73e0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.437442307Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-kdzqp,Uid:8fbe83c2-1b3b-4877-9801-03db25f6f671,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381997089675,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:58:00.321441053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-635451,Uid:fa7473271e234f02b080034925c004d9,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381751153626,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa7473271e234f02b080034925c004d9,kubernetes.io/config.seen: 2024-04-19T20:57:44.741292188Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-635451,Uid:70f072fb557eff82ceb93224ae0c8a6d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381718250864,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557ef
f82ceb93224ae0c8a6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 70f072fb557eff82ceb93224ae0c8a6d,kubernetes.io/config.seen: 2024-04-19T20:57:44.741290708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&PodSandboxMetadata{Name:kube-proxy-htrpl,Uid:d2283d8e-90e8-4216-9469-241c55639a22,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381701660931,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-19T20:57:59.808686471Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-635451,Uid:bfed9f7b1b9fc24cfca8d82324ef4c44,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381659058933,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.194:2379,kubernetes.io/config.hash: bfed9f7b1b9fc24cfca8d82324ef4c44,kubernetes.io/config.seen: 2024-04-19T20:57:44.741284926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-635451,Uid:aad13997a933145777f6a2b13a12fdf2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713560381648197147,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.194:8443,kubernetes.io/config.hash: aad13997a933145777f6a2b13a12fdf2,kubernetes.io/config.seen: 2024-04-19T20:57:44.741288747Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=216dd351-15dc-4e6a-9e33-7127224f73e0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.438541147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a00305eb-5c8e-41df-aa59-63dd5b17214b name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.438632718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a00305eb-5c8e-41df-aa59-63dd5b17214b name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.438802846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a00305eb-5c8e-41df-aa59-63dd5b17214b name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.452366974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4ffda34-850c-4955-8af2-e31fec34df33 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.452460539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4ffda34-850c-4955-8af2-e31fec34df33 name=/runtime.v1.RuntimeService/Version
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.454255318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b995de7-43b3-481b-82dd-dc34bb13d3c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.454850366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713560414454824288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b995de7-43b3-481b-82dd-dc34bb13d3c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.456230046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14eaba5a-9286-4a68-8f52-2dfc1e361a2c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.456334008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14eaba5a-9286-4a68-8f52-2dfc1e361a2c name=/runtime.v1.RuntimeService/ListContainers
	Apr 19 21:00:14 pause-635451 crio[2727]: time="2024-04-19 21:00:14.456754099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713560395724710296,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362edd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c,PodSandboxId:c3063be3674ca5a9f08809326443556829693acedaf6c126356ee072fd53483f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713560395728108403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8,PodSandboxId:c322b07967f55252f3038097fdd473cda4fb097e7dddfb5e502151a062b21e26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713560395204873059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0,PodSandboxId:5336fe835cda5e35bc1633bbdf1dc16aaf35fe7e9ed849334753dab7751812b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713560395221202207,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa
7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c,PodSandboxId:7b56944d9345ce4c1d0590275e065e9460a5d9fb6b0f84a2feb503422bac6e59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713560391090853529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map
[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100,PodSandboxId:1898ecf7e47f5165dc9cb899711c6106a37c59c968c4a5d5873dcdfcf1ff2d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713560390835958826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.
kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc,PodSandboxId:437b207ae6e5cc94cb7ba2cf1797a913e611f4bd89141b9bf520f7b05ca72992,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713560382277915912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kdzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbe83c2-1b3b-4877-9801-03db25f6f671,},Annotations:map[string]string{io.kubernetes.container.hash: 5362e
dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f,PodSandboxId:936f48449dffe0d58ca541ba4078725c751f59accce87a0881b5b8228d85283f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713560379230852488,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f072fb557eff82ceb93224ae0c8a6d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688,PodSandboxId:71f26376164410a040d6859138e566febb57d7d6dbe63ae76828f987a8962975,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713560379139097056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kuberne
tes.pod.name: kube-proxy-htrpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2283d8e-90e8-4216-9469-241c55639a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3c2b4676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c,PodSandboxId:b1fc26c3e6abbc8b2a19e79705acbabcd6be9df73c39c66292f99e07b4448815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713560379112864006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-635451,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: bfed9f7b1b9fc24cfca8d82324ef4c44,},Annotations:map[string]string{io.kubernetes.container.hash: 6bc4fe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a,PodSandboxId:dd7e79bdfd3d8e98be0619dad04484e732856f40eabd865b18baeddeb2616f8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713560378940015661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-635451,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: fa7473271e234f02b080034925c004d9,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2,PodSandboxId:eaad491048a1ba5c60bc83f0fabe8045c28ebe87b36f3b5c9d339e4e421027c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713560378885865528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-635451,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aad13997a933145777f6a2b13a12fdf2,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14eaba5a-9286-4a68-8f52-2dfc1e361a2c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71b0fcef59eb4       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   18 seconds ago      Running             kube-proxy                2                   c3063be3674ca       kube-proxy-htrpl
	a9164b4a6e2d7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   437b207ae6e5c       coredns-7db6d8ff4d-kdzqp
	1527fc950a7e5       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   19 seconds ago      Running             kube-scheduler            2                   5336fe835cda5       kube-scheduler-pause-635451
	59e56c4b1b47e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   19 seconds ago      Running             kube-controller-manager   2                   c322b07967f55       kube-controller-manager-pause-635451
	b41821fec9389       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   7b56944d9345c       etcd-pause-635451
	74e2dafa9f68b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   23 seconds ago      Running             kube-apiserver            2                   1898ecf7e47f5       kube-apiserver-pause-635451
	05da0476b25f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   32 seconds ago      Exited              coredns                   1                   437b207ae6e5c       coredns-7db6d8ff4d-kdzqp
	3d71002c43260       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   35 seconds ago      Exited              kube-controller-manager   1                   936f48449dffe       kube-controller-manager-pause-635451
	3f576aaf9e9ac       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   35 seconds ago      Exited              kube-proxy                1                   71f2637616441       kube-proxy-htrpl
	966cdde8876c5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago      Exited              etcd                      1                   b1fc26c3e6abb       etcd-pause-635451
	69065f054f185       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   35 seconds ago      Exited              kube-scheduler            1                   dd7e79bdfd3d8       kube-scheduler-pause-635451
	4b96ad4f8e464       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   35 seconds ago      Exited              kube-apiserver            1                   eaad491048a1b       kube-apiserver-pause-635451
	
	
	==> coredns [05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53034 - 798 "HINFO IN 3218961398787753936.8471243877830540174. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016503608s
	
	
	==> coredns [a9164b4a6e2d7c8d8b6b48b31accb712bfabf834b5e7da5ed98859c5fbf141e9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39485 - 62323 "HINFO IN 782361804124066944.5855134252363606121. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010080698s
	
	
	==> describe nodes <==
	Name:               pause-635451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-635451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=pause-635451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T20_57_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 20:57:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-635451
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 21:00:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 20:59:54 +0000   Fri, 19 Apr 2024 20:57:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    pause-635451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a76f5d1555d4c058bf01af025880694
	  System UUID:                6a76f5d1-555d-4c05-8bf0-1af025880694
	  Boot ID:                    ad778ef9-e879-4dd3-a365-2faa099aab85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-kdzqp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m14s
	  kube-system                 etcd-pause-635451                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m30s
	  kube-system                 kube-apiserver-pause-635451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-pause-635451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-htrpl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-pause-635451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m13s              kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m30s              kubelet          Node pause-635451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m30s              kubelet          Node pause-635451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s              kubelet          Node pause-635451 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m30s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s              kubelet          Node pause-635451 status is now: NodeReady
	  Normal  RegisteredNode           2m16s              node-controller  Node pause-635451 event: Registered Node pause-635451 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x2 over 20s)  kubelet          Node pause-635451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x2 over 20s)  kubelet          Node pause-635451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x2 over 20s)  kubelet          Node pause-635451 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-635451 event: Registered Node pause-635451 in Controller
	
	
	==> dmesg <==
	[  +9.551093] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.059198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066935] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.185309] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.159094] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.298736] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.691706] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062737] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.354551] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.767149] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.820921] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.083835] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.943250] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.101097] kauditd_printk_skb: 21 callbacks suppressed
	[Apr19 20:58] kauditd_printk_skb: 69 callbacks suppressed
	[Apr19 20:59] systemd-fstab-generator[2165]: Ignoring "noauto" option for root device
	[  +0.251142] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.569864] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.358522] systemd-fstab-generator[2568]: Ignoring "noauto" option for root device
	[  +0.548602] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +1.315344] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +3.447997] kauditd_printk_skb: 243 callbacks suppressed
	[  +9.220327] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +2.797483] kauditd_printk_skb: 44 callbacks suppressed
	[Apr19 21:00] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	
	
	==> etcd [966cdde8876c5b84b6015dd6256b018c3aef57c66f6c3f6d8d14d5ae633d8c9c] <==
	{"level":"info","ts":"2024-04-19T20:59:39.536221Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"44.894246ms"}
	{"level":"info","ts":"2024-04-19T20:59:39.551137Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-19T20:59:39.617375Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","commit-index":441}
	{"level":"info","ts":"2024-04-19T20:59:39.61767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-19T20:59:39.617783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became follower at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:39.617798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b4bd7d4638784c91 [peers: [], term: 2, commit: 441, applied: 0, lastindex: 441, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-19T20:59:39.641815Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-19T20:59:39.684285Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":409}
	{"level":"info","ts":"2024-04-19T20:59:39.689386Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-19T20:59:39.693782Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b4bd7d4638784c91","timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:59:39.694153Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b4bd7d4638784c91"}
	{"level":"info","ts":"2024-04-19T20:59:39.694199Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b4bd7d4638784c91","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-19T20:59:39.694778Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-19T20:59:39.694961Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.695031Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.695041Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-19T20:59:39.69532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2024-04-19T20:59:39.695385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2024-04-19T20:59:39.700711Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:39.700756Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T20:59:39.700767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T20:59:39.70111Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:59:39.70119Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:59:39.701359Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:39.701365Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	
	
	==> etcd [b41821fec9389cc088849462b1a6d9d413a23df22ea832c98e2397dfce410b6c] <==
	{"level":"info","ts":"2024-04-19T20:59:51.338417Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T20:59:51.338535Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T20:59:51.33864Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:51.338679Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-19T20:59:52.707819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2024-04-19T20:59:52.707965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.70797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.707978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.708014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-19T20:59:52.712158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:59:52.713996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-19T20:59:52.714296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T20:59:52.715821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-19T20:59:52.712101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:pause-635451 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T20:59:52.720654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T20:59:52.720669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-04-19T20:59:56.278072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.872282ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348214989032130 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-635451\" mod_revision:430 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-635451\" value_size:5593 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-635451\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-19T20:59:56.27817Z","caller":"traceutil/trace.go:171","msg":"trace[632608803] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:478; }","duration":"444.792391ms","start":"2024-04-19T20:59:55.833361Z","end":"2024-04-19T20:59:56.278154Z","steps":["trace[632608803] 'read index received'  (duration: 322.897021ms)","trace[632608803] 'applied index is now lower than readState.Index'  (duration: 121.894523ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:59:56.278318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"444.950633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:1 size:1930"}
	{"level":"info","ts":"2024-04-19T20:59:56.27835Z","caller":"traceutil/trace.go:171","msg":"trace[1678152905] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:439; }","duration":"444.990905ms","start":"2024-04-19T20:59:55.833339Z","end":"2024-04-19T20:59:56.27833Z","steps":["trace[1678152905] 'agreement among raft nodes before linearized reading'  (duration: 444.856585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-19T20:59:56.278373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:59:55.833326Z","time spent":"445.042333ms","remote":"127.0.0.1:45866","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":1954,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" "}
	{"level":"info","ts":"2024-04-19T20:59:56.278824Z","caller":"traceutil/trace.go:171","msg":"trace[1215252187] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"471.760768ms","start":"2024-04-19T20:59:55.807045Z","end":"2024-04-19T20:59:56.278806Z","steps":["trace[1215252187] 'process raft request'  (duration: 349.260846ms)","trace[1215252187] 'compare'  (duration: 120.801762ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-19T20:59:56.278896Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-19T20:59:55.80703Z","time spent":"471.828451ms","remote":"127.0.0.1:45726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5645,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-635451\" mod_revision:430 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-635451\" value_size:5593 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-635451\" > >"}
	
	
	==> kernel <==
	 21:00:14 up 3 min,  0 users,  load average: 1.11, 0.47, 0.18
	Linux pause-635451 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b96ad4f8e4649d102bb88ca2d44d0280dfa485d95def1b9eb1c3aef3560b0a2] <==
	I0419 20:59:39.603565       1 options.go:221] external host was not specified, using 192.168.39.194
	I0419 20:59:39.604897       1 server.go:148] Version: v1.30.0
	I0419 20:59:39.604933       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [74e2dafa9f68b54e3d12fdbb949ea465537741786e7a1b4d42cc2cb5ce79d100] <==
	I0419 20:59:54.771087       1 shared_informer.go:320] Caches are synced for configmaps
	I0419 20:59:54.771163       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0419 20:59:54.771736       1 aggregator.go:165] initial CRD sync complete...
	I0419 20:59:54.771818       1 autoregister_controller.go:141] Starting autoregister controller
	I0419 20:59:54.771846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0419 20:59:54.771870       1 cache.go:39] Caches are synced for autoregister controller
	E0419 20:59:54.781011       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0419 20:59:54.796435       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0419 20:59:54.800966       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0419 20:59:54.801015       1 policy_source.go:224] refreshing policies
	I0419 20:59:54.856872       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 20:59:55.689234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 20:59:56.348173       1 trace.go:236] Trace[1917383722]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6eae9f10-d2cf-426e-89f0-b48984ccce19,client:192.168.39.194,api-group:,api-version:v1,name:etcd-pause-635451,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-635451/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (19-Apr-2024 20:59:55.778) (total time: 569ms):
	Trace[1917383722]: ["GuaranteedUpdate etcd3" audit-id:6eae9f10-d2cf-426e-89f0-b48984ccce19,key:/pods/kube-system/etcd-pause-635451,type:*core.Pod,resource:pods 569ms (20:59:55.778)
	Trace[1917383722]:  ---"Txn call completed" 541ms (20:59:56.343)]
	Trace[1917383722]: ---"About to check admission control" 16ms (20:59:55.795)
	Trace[1917383722]: ---"Object stored in database" 548ms (20:59:56.343)
	Trace[1917383722]: [569.322831ms] [569.322831ms] END
	I0419 20:59:57.063278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0419 20:59:57.077424       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0419 20:59:57.138881       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0419 20:59:57.181078       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 20:59:57.190271       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 21:00:07.814929       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 21:00:07.820215       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f] <==
	
	
	==> kube-controller-manager [59e56c4b1b47ed66c55758965c99aef8fe47c5938feb8cf57c16cd92f0149ca8] <==
	I0419 21:00:07.841929       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-635451"
	I0419 21:00:07.842091       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0419 21:00:07.846023       1 shared_informer.go:320] Caches are synced for deployment
	I0419 21:00:07.848914       1 shared_informer.go:320] Caches are synced for disruption
	I0419 21:00:07.853366       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0419 21:00:07.855905       1 shared_informer.go:320] Caches are synced for HPA
	I0419 21:00:07.858696       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0419 21:00:07.858842       1 shared_informer.go:320] Caches are synced for expand
	I0419 21:00:07.861117       1 shared_informer.go:320] Caches are synced for PVC protection
	I0419 21:00:07.863998       1 shared_informer.go:320] Caches are synced for PV protection
	I0419 21:00:07.885812       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0419 21:00:07.886129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="169.209µs"
	I0419 21:00:07.919677       1 shared_informer.go:320] Caches are synced for job
	I0419 21:00:07.935727       1 shared_informer.go:320] Caches are synced for cronjob
	I0419 21:00:07.936266       1 shared_informer.go:320] Caches are synced for service account
	I0419 21:00:07.941675       1 shared_informer.go:320] Caches are synced for namespace
	I0419 21:00:07.993904       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0419 21:00:08.014744       1 shared_informer.go:320] Caches are synced for attach detach
	I0419 21:00:08.033575       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 21:00:08.055742       1 shared_informer.go:320] Caches are synced for stateful set
	I0419 21:00:08.070670       1 shared_informer.go:320] Caches are synced for resource quota
	I0419 21:00:08.101573       1 shared_informer.go:320] Caches are synced for daemon sets
	I0419 21:00:08.498309       1 shared_informer.go:320] Caches are synced for garbage collector
	I0419 21:00:08.498403       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0419 21:00:08.512924       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688] <==
	
	
	==> kube-proxy [71b0fcef59eb40d0a7da3075c12e5078193aca55602bddf39b6bfdb549dadf1c] <==
	I0419 20:59:56.496357       1 server_linux.go:69] "Using iptables proxy"
	I0419 20:59:56.514816       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0419 20:59:56.596191       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0419 20:59:56.596298       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0419 20:59:56.596336       1 server_linux.go:165] "Using iptables Proxier"
	I0419 20:59:56.608186       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0419 20:59:56.608384       1 server.go:872] "Version info" version="v1.30.0"
	I0419 20:59:56.608424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:59:56.611834       1 config.go:192] "Starting service config controller"
	I0419 20:59:56.611877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0419 20:59:56.611902       1 config.go:101] "Starting endpoint slice config controller"
	I0419 20:59:56.611906       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0419 20:59:56.612235       1 config.go:319] "Starting node config controller"
	I0419 20:59:56.612272       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0419 20:59:56.713729       1 shared_informer.go:320] Caches are synced for service config
	I0419 20:59:56.713859       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0419 20:59:56.714435       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1527fc950a7e5863885b847a684bec383eaaed790ea99ef0219e057fe02cbee0] <==
	I0419 20:59:56.253695       1 serving.go:380] Generated self-signed cert in-memory
	I0419 20:59:57.224389       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0419 20:59:57.224544       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 20:59:57.230693       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0419 20:59:57.231145       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0419 20:59:57.231269       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0419 20:59:57.231424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0419 20:59:57.233127       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0419 20:59:57.233323       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0419 20:59:57.233435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0419 20:59:57.233464       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 20:59:57.331937       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0419 20:59:57.334406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0419 20:59:57.334475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a] <==
	
	
	==> kubelet <==
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826593    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-k8s-certs\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826673    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826736    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa7473271e234f02b080034925c004d9-kubeconfig\") pod \"kube-scheduler-pause-635451\" (UID: \"fa7473271e234f02b080034925c004d9\") " pod="kube-system/kube-scheduler-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826779    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bfed9f7b1b9fc24cfca8d82324ef4c44-etcd-certs\") pod \"etcd-pause-635451\" (UID: \"bfed9f7b1b9fc24cfca8d82324ef4c44\") " pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826833    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/bfed9f7b1b9fc24cfca8d82324ef4c44-etcd-data\") pod \"etcd-pause-635451\" (UID: \"bfed9f7b1b9fc24cfca8d82324ef4c44\") " pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826881    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-ca-certs\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.826936    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-k8s-certs\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827031    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aad13997a933145777f6a2b13a12fdf2-usr-share-ca-certificates\") pod \"kube-apiserver-pause-635451\" (UID: \"aad13997a933145777f6a2b13a12fdf2\") " pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827109    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-ca-certs\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827193    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-flexvolume-dir\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: I0419 20:59:54.827314    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f072fb557eff82ceb93224ae0c8a6d-kubeconfig\") pod \"kube-controller-manager-pause-635451\" (UID: \"70f072fb557eff82ceb93224ae0c8a6d\") " pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.887711    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-635451\" already exists" pod="kube-system/kube-controller-manager-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.888764    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-635451\" already exists" pod="kube-system/kube-scheduler-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.889051    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-635451\" already exists" pod="kube-system/kube-apiserver-pause-635451"
	Apr 19 20:59:54 pause-635451 kubelet[3589]: E0419 20:59:54.889883    3589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-635451\" already exists" pod="kube-system/etcd-pause-635451"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.188235    3589 scope.go:117] "RemoveContainer" containerID="3d71002c43260699b493b6e9e87fde840cc48feff2ffa00a1f442dcd6fec716f"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.189668    3589 scope.go:117] "RemoveContainer" containerID="69065f054f18523fe63bcac3bd101d941f1bc3ad76170cebb361c9e08eabd28a"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.380256    3589 apiserver.go:52] "Watching apiserver"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.389940    3589 topology_manager.go:215] "Topology Admit Handler" podUID="d2283d8e-90e8-4216-9469-241c55639a22" podNamespace="kube-system" podName="kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.390401    3589 topology_manager.go:215] "Topology Admit Handler" podUID="8fbe83c2-1b3b-4877-9801-03db25f6f671" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kdzqp"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.412806    3589 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.432708    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2283d8e-90e8-4216-9469-241c55639a22-lib-modules\") pod \"kube-proxy-htrpl\" (UID: \"d2283d8e-90e8-4216-9469-241c55639a22\") " pod="kube-system/kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.433278    3589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2283d8e-90e8-4216-9469-241c55639a22-xtables-lock\") pod \"kube-proxy-htrpl\" (UID: \"d2283d8e-90e8-4216-9469-241c55639a22\") " pod="kube-system/kube-proxy-htrpl"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.691349    3589 scope.go:117] "RemoveContainer" containerID="3f576aaf9e9acaa49c6414bd12f303c18492ac62029f1d9e369d8d99e15a9688"
	Apr 19 20:59:55 pause-635451 kubelet[3589]: I0419 20:59:55.691960    3589 scope.go:117] "RemoveContainer" containerID="05da0476b25f0835426f67d51123733891953b0ff9d7de2e384321342653c7dc"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0419 21:00:13.910212  421524 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18669-366597/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
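The "bufio.Scanner: token too long" error in the stderr block above is the stock failure mode of Go's bufio.Scanner when a single line exceeds its default 64 KiB token limit, which is what happens when lastStart.txt contains a very long log line. A minimal sketch of reading such a file with a larger buffer (the file name and the 10 MiB cap are illustrative assumptions, not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path; the report's actual file lives under .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// bufio.MaxScanTokenSize is 64 KiB; a single longer line makes Scan fail
		// with "token too long". Raising the cap (here to 10 MiB) avoids that.
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for s.Scan() {
			fmt.Println(s.Text())
		}
		if err := s.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}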
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-635451 -n pause-635451
helpers_test.go:261: (dbg) Run:  kubectl --context pause-635451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (93.54s)
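For reference, the post-mortem check at helpers_test.go:261 above (kubectl get po with --field-selector=status.phase!=Running) can be approximated with client-go. This is a rough, self-contained sketch that assumes a standard KUBECONFIG environment variable; it is not the suite's actual helper, which targets the pause-635451 context:

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same query the post-mortem runs: every pod, in any namespace, that is not Running.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}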

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.071s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.247:8443: connect: connection refused
[warning above repeated verbatim 40 more times; the apiserver at 192.168.83.247:8443 remained unreachable throughout]
E0419 21:17:10.227538  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
[the same warning repeated a further 19 times until the test binary's 2h timeout fired]
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (20m19s)
	TestStartStop (20m30s)
	TestStartStop/group/default-k8s-diff-port (17m8s)
	TestStartStop/group/default-k8s-diff-port/serial (17m8s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (3m30s)
	TestStartStop/group/embed-certs (17m12s)
	TestStartStop/group/embed-certs/serial (17m12s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4m52s)
	TestStartStop/group/no-preload (18m48s)
	TestStartStop/group/no-preload/serial (18m48s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (3m36s)
	TestStartStop/group/old-k8s-version (19m13s)
	TestStartStop/group/old-k8s-version/serial (19m13s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (59s)
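For context on the failure mode recorded above: the repeated "connection refused" warnings come from a pod-wait helper that treats transient apiserver errors as retryable, and the PodWait / wait.PollUntilContextTimeout frames in the goroutine dumps below show those polling loops still parked when the 2h per-binary go test timeout fired. A minimal, hypothetical sketch of that style of polling loop follows; the function name waitForLabelledPod and the package name are illustrative only, not minikube's actual helper.

// Hypothetical sketch (not minikube's actual code): a PodWait-style helper that
// polls the apiserver with k8s.io/apimachinery's PollUntilContextTimeout,
// logging and retrying transient errors such as "connection refused".
package example

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabelledPod blocks until a pod matching selector in ns is Running,
// or until timeout expires. The condition is re-evaluated once per second.
func waitForLabelledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Transient apiserver failures (e.g. the control plane restarting)
				// are logged and retried rather than failing the wait outright.
				log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

Under this pattern the wait only gives up when its own deadline passes (9m0s in the run above); here the outer 2h test-binary timeout expired first, which is why this section ends in a panic and goroutine dump rather than an ordinary test failure.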

                                                
                                                
goroutine 3466 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00003d040, 0xc000873bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00071e510, {0x4955920, 0x2b, 0x2b}, {0x26ad459?, 0xc00090c480?, 0x4a11cc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0012d0d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0012d0d20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006d8d80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 547 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc002142f20, 0xc002679aa0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 546
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 83 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 82
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2338 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00267eb60, {0x267ea8a?, 0x60400000004?}, 0xc0001c2580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00267eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00267eb60, 0xc00052b080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1856
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2690 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e560, 0xc000410c40}, {0x3631c00, 0xc0021e8080}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e560?, 0xc00076a000?}, 0x3b9aca00, 0xc002199e10?, 0x1, 0xc002199c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e560, 0xc00076a000}, 0xc0000f8d00, {0xc00132a1f8, 0x11}, {0x2678d77, 0x14}, {0x269081e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x363e560, 0xc00076a000}, 0xc0000f8d00, {0xc00132a1f8, 0x11}, {0x265e178?, 0xc001332f60?}, {0x552353?, 0x4a26cf?}, {0xc000a1c000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000f8d00, 0xc0001c2580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2338
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2360 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002072480, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 335 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0023aa4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 344
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3121 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022fef00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3120
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2279 [chan receive]:
testing.(*T).Run(0xc00267e000, {0x267ea8a?, 0x60400000004?}, 0xc00203a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00267e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00267e000, 0xc00052a280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1853
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1852 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00239e680, 0x30c0048)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1717
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 336 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0012daec0, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 344
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1855 [chan receive, 17 minutes]:
testing.(*T).Run(0xc00239eb60, {0x2654613?, 0x0?}, 0xc00052b680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00239eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00239eb60, 0xc0028d2240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1852
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2359 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002a25320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2673 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e560, 0xc000472c40}, {0x3631c00, 0xc0023e8560}, 0x1, 0x0, 0xc00205dc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e560?, 0xc00045c150?}, 0x3b9aca00, 0xc002195e10?, 0x1, 0xc002195c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e560, 0xc00045c150}, 0xc002185520, {0xc0025f6000, 0x1c}, {0x2678d77, 0x14}, {0x269081e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x363e560, 0xc00045c150}, 0xc002185520, {0xc0025f6000, 0x1c}, {0x267bc33?, 0xc00229bf60?}, {0x552353?, 0x4a26cf?}, {0xc000a1d900, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002185520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002185520, 0xc00203a500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 428 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026d2160, 0xc002678ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 427
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 227 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7fcdcc450540, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x531?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000896100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000896100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00082e560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00082e560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0012780f0, {0x3631540, 0xc00082e560})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0012780f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x594064?, 0xc002066b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 192
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 457 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023f5600, 0xc002822660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 319
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3172 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3171
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2425 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021fadc0, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 639 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc0023826c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 637
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 1974 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00003dba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00003dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00003dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00003dba0, 0xc0001c3380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2318 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0020661a0, {0x267ea8a?, 0x60400000004?}, 0xc000742000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0020661a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0020661a0, 0xc000742080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 640 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc0023826c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 637
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2665 [IO wait]:
internal/poll.runtime_pollWait(0x7fcdcc44fa98, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0021f6000?, 0xc000909000?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0021f6000, {0xc000909000, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0021f6000, {0xc000909000?, 0xc00045edc0?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc002738550, {0xc000909000?, 0xc000909005?, 0x6f?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0021a8df8, {0xc000909000?, 0x0?, 0xc0021a8df8?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0023a70b0, {0x361b060, 0xc0021a8df8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0023a6e08, {0x7fcdccc9a130, 0xc00273b290}, 0xc00268b980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0023a6e08, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0023a6e08, {0xc002049000, 0x1000, 0xc002330380?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0012fd860, {0xc00292c740, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3619520, 0xc0012fd860}, {0xc00292c740, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00292c740, 0x9, 0x268bdc0?}, {0x3619520?, 0xc0012fd860?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00292c700)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00268bfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc002034180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2664
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 1853 [chan receive, 19 minutes]:
testing.(*T).Run(0xc00239e820, {0x2654613?, 0x0?}, 0xc00052a280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00239e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00239e820, 0xc0028d2140)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1852
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1906 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000727520, 0xc0020f0d50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1694
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 362 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0012dae90, 0x23)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0023aa3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0012daec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021ba480, {0x361a8a0, 0xc00209be60}, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021ba480, 0x3b9aca00, 0x0, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 363 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e720, 0xc000140000}, 0xc001334750, 0xc001351f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e720, 0xc000140000}, 0xa0?, 0xc001334750, 0xc001334798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e720?, 0xc000140000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013347d0?, 0x594064?, 0xc000060ea0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 364 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 363
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2350 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e720, 0xc000140000}, 0xc002813750, 0xc001307f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e720, 0xc000140000}, 0x60?, 0xc002813750, 0xc002813798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e720?, 0xc000140000?}, 0xc00267e000?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0028137d0?, 0x594064?, 0xc002822301?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3120 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e560, 0xc000159c70}, {0x3631c00, 0xc0003fee20}, 0x1, 0x0, 0xc0012cfc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e560?, 0xc00045c310?}, 0x3b9aca00, 0xc0012cfe10?, 0x1, 0xc0012cfc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e560, 0xc00045c310}, 0xc002184000, {0xc002902120, 0x16}, {0x2678d77, 0x14}, {0x269081e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x363e560, 0xc00045c310}, 0xc002184000, {0xc002902120, 0x16}, {0x266a25b?, 0xc000095f60?}, {0x552353?, 0x4a26cf?}, {0xc000204780, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002184000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002184000, 0xc00203a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2279
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2584 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e560, 0xc0004da770}, {0x3631c00, 0xc002866240}, 0x1, 0x0, 0xc0012cbc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e560?, 0xc000882000?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e560, 0xc000882000}, 0xc0000f8680, {0xc00005e2d0, 0x12}, {0x2678d77, 0x14}, {0x269081e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x363e560, 0xc000882000}, 0xc0000f8680, {0xc00005e2d0, 0x12}, {0x2660354?, 0xc00280e760?}, {0x552353?, 0x4a26cf?}, {0xc000a1d800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000f8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000f8680, 0xc000742000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2318
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2683 [IO wait]:
internal/poll.runtime_pollWait(0x7fcdcc450448, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00203b200?, 0xc00232c000?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00203b200, {0xc00232c000, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00203b200, {0xc00232c000?, 0xc0006e2b40?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000a08890, {0xc00232c000?, 0xc00232c05f?, 0x6f?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00273b680, {0xc00232c000?, 0x0?, 0xc00273b680?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0020982b0, {0x361b060, 0xc00273b680})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc002098008, {0x7fcdccc9a130, 0xc0022caeb8}, 0xc00134d980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc002098008, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc002098008, {0xc002353000, 0x1000, 0xc000583340?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00202b020, {0xc0001c77e0, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3619520, 0xc00202b020}, {0xc0001c77e0, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0001c77e0, 0x9, 0x134ddc0?}, {0x3619520?, 0xc00202b020?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001c77a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00134dfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00217cc00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2682
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 2442 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2441
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1973 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00003d520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00003d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00003d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00003d520, 0xc0001c2a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1717 [chan receive, 21 minutes]:
testing.(*T).Run(0xc00003d1e0, {0x2653086?, 0x552353?}, 0x30c0048)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00003d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00003d1e0, 0x30bfe70)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1694 [chan receive, 21 minutes]:
testing.(*T).Run(0xc00003c340, {0x2653086?, 0x55249c?}, 0xc0020f0d50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00003c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00003c340, 0x30bfe28)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1889 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00267e680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00267e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00267e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00267e680, 0xc000896700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1971 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00003c9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00003c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00003c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00003c9c0, 0xc0001c2980)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2349 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002072450, 0x3)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002a25140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002072480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021ca3b0, {0x361a8a0, 0xc002d6e1b0}, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021ca3b0, 0x3b9aca00, 0x0, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
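
The cert_rotation goroutines (2349 above, and 3170 and 2440 below) all show the same client-go worker shape: wait.Until re-runs a worker roughly once a second, and the worker blocks inside workqueue (*Type).Get, which is the sync.Cond.Wait frame in the trace. A compressed, hedged sketch of that loop is below; it is not minikube's or client-go's exact code, only the pattern the stack frames correspond to.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()      // the traces block inside (*Type).Get on a queue like this
	stopCh := make(chan struct{}) // closed when the transport cache tears the worker down

	worker := func() {
		for {
			item, shutdown := queue.Get() // corresponds to the sync.Cond.Wait frame above
			if shutdown {
				return
			}
			fmt.Println("processing", item) // stands in for processNextWorkItem's real work
			queue.Done(item)
		}
	}

	go wait.Until(worker, time.Second, stopCh) // mirrors the BackoffUntil/JitterUntil frames

	queue.Add("rotate-client-cert")
	time.Sleep(100 * time.Millisecond)
	close(stopCh)
	queue.ShutDown() // unblocks Get so the worker goroutine can exit
}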

                                                
                                                
goroutine 1854 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00239e9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00239e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00239e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00239e9c0, 0xc0028d2200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1852
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2424 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002695320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3154 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0028d3580, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3120
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1972 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00003d380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00003d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00003d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00003d380, 0xc0001c2a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1907 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007276c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007276c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007276c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007276c0, 0xc0006d8380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3170 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0028d3550, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022fede0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0028d3580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012f6330, {0x361a8a0, 0xc0024be150}, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012f6330, 0x3b9aca00, 0x0, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1856 [chan receive, 19 minutes]:
testing.(*T).Run(0xc00239ed00, {0x2654613?, 0x0?}, 0xc00052b080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00239ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00239ed00, 0xc0028d2280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1852
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2352 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00267ed00, {0x267ea8a?, 0x60400000004?}, 0xc00203a500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00267ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00267ed00, 0xc00052b680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1855
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1890 [chan receive, 17 minutes]:
testing.(*T).Run(0xc00239f1e0, {0x2654613?, 0x0?}, 0xc000742080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00239f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00239f1e0, 0xc0028d2340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1852
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2351 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2350
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2441 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e720, 0xc000140000}, 0xc00280c750, 0xc00280c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e720, 0xc000140000}, 0x1c?, 0xc00280c750, 0xc00280c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e720?, 0xc000140000?}, 0xc00267e9c0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00280c7d0?, 0x594064?, 0xc000896000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2425
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a
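
Goroutines 2441 and 2442 (like 2350/2351 and 3171) pair up the same way: PollImmediateUntilWithContext spawns a poller goroutine that ticks on an interval while the caller selects on its results, so the pair shows up as two matched [select] entries. A hedged usage example of that apimachinery function, with a toy condition in place of the certificate check the transport actually performs:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	start := time.Now()
	// Poll once immediately, then every 500ms, until the condition returns true, returns an
	// error, or ctx is done; the interval ticker runs in its own goroutine, as in the dump.
	err := wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond, func(ctx context.Context) (bool, error) {
		return time.Since(start) > time.Second, nil // toy condition standing in for the real check
	})
	fmt.Println("poll finished:", err)
}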

                                                
                                                
goroutine 2050 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006fe9b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00267e820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00267e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00267e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00267e820, 0xc000896780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1906
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2440 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0021fad90, 0x3)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002695200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021fadc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002b6020, {0x361a8a0, 0xc0026e4030}, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002b6020, 0x3b9aca00, 0x0, 0x1, 0xc000140000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2425
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3171 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e720, 0xc000140000}, 0xc000504750, 0xc000504798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e720, 0xc000140000}, 0x12?, 0xc000504750, 0xc000504798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e720?, 0xc000140000?}, 0xc0005047b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99de1b?, 0xc002240d80?, 0xc000742200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2539 [IO wait]:
internal/poll.runtime_pollWait(0x7fcdcc450258, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000743180?, 0xc002180800?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000743180, {0xc002180800, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000743180, {0xc002180800?, 0x7fcdcc26b548?, 0xc00273b5c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00011c008, {0xc002180800?, 0xc001306938?, 0x41567b?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00273b5c0, {0xc002180800?, 0x0?, 0xc00273b5c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0023a69b0, {0x361b060, 0xc00273b5c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0023a6708, {0x361a420, 0xc00011c008}, 0xc001306980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0023a6708, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0023a6708, {0xc00238e000, 0x1000, 0xc000583340?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc002a258c0, {0xc00207a200, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3619520, 0xc002a258c0}, {0xc00207a200, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00207a200, 0x9, 0x1306dc0?}, {0x3619520?, 0xc002a258c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00207a1c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001306fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000204a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2538
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                    

Test pass (164/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.0/json-events 15.52
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 153.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
28 TestCertOptions 53.28
29 TestCertExpiration 289.89
31 TestForceSystemdFlag 52.02
32 TestForceSystemdEnv 73.57
34 TestKVMDriverInstallOrUpdate 4.09
38 TestErrorSpam/setup 44.18
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.77
41 TestErrorSpam/pause 1.61
42 TestErrorSpam/unpause 1.63
43 TestErrorSpam/stop 6.1
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 59.43
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 49.71
50 TestFunctional/serial/KubeContext 0.05
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.86
55 TestFunctional/serial/CacheCmd/cache/add_local 2.16
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
60 TestFunctional/serial/CacheCmd/cache/delete 0.12
61 TestFunctional/serial/MinikubeKubectlCmd 0.12
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
63 TestFunctional/serial/ExtraConfig 54.87
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.5
66 TestFunctional/serial/LogsFileCmd 1.47
67 TestFunctional/serial/InvalidService 4.24
69 TestFunctional/parallel/ConfigCmd 0.42
70 TestFunctional/parallel/DashboardCmd 17.22
71 TestFunctional/parallel/DryRun 0.33
72 TestFunctional/parallel/InternationalLanguage 0.17
73 TestFunctional/parallel/StatusCmd 1.19
77 TestFunctional/parallel/ServiceCmdConnect 8.63
78 TestFunctional/parallel/AddonsCmd 0.16
79 TestFunctional/parallel/PersistentVolumeClaim 50.19
81 TestFunctional/parallel/SSHCmd 0.6
82 TestFunctional/parallel/CpCmd 1.44
83 TestFunctional/parallel/MySQL 32.23
84 TestFunctional/parallel/FileSync 0.26
85 TestFunctional/parallel/CertSync 1.79
89 TestFunctional/parallel/NodeLabels 0.09
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
93 TestFunctional/parallel/License 0.56
94 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
95 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
96 TestFunctional/parallel/ProfileCmd/profile_list 0.33
97 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
98 TestFunctional/parallel/MountCmd/any-port 10.73
99 TestFunctional/parallel/ServiceCmd/List 0.47
100 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
101 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
102 TestFunctional/parallel/MountCmd/specific-port 2.03
103 TestFunctional/parallel/ServiceCmd/Format 0.29
104 TestFunctional/parallel/ServiceCmd/URL 0.3
105 TestFunctional/parallel/Version/short 0.08
106 TestFunctional/parallel/Version/components 0.91
107 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
108 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
109 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
110 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
111 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
112 TestFunctional/parallel/ImageCommands/Setup 1.95
113 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.9
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.19
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.43
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.08
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 213.22
140 TestMultiControlPlane/serial/DeployApp 6.83
141 TestMultiControlPlane/serial/PingHostFromPods 1.4
142 TestMultiControlPlane/serial/AddWorkerNode 48.68
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
145 TestMultiControlPlane/serial/CopyFile 14.02
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.54
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
154 TestMultiControlPlane/serial/RestartCluster 328.12
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
156 TestMultiControlPlane/serial/AddSecondaryNode 78.14
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
161 TestJSONOutput/start/Command 62.6
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.74
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.65
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.38
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.22
189 TestMainNoArgs 0.06
190 TestMinikubeProfile 90.02
193 TestMountStart/serial/StartWithMountFirst 28.4
194 TestMountStart/serial/VerifyMountFirst 0.4
195 TestMountStart/serial/StartWithMountSecond 26.63
196 TestMountStart/serial/VerifyMountSecond 0.4
197 TestMountStart/serial/DeleteFirst 0.89
198 TestMountStart/serial/VerifyMountPostDelete 0.41
199 TestMountStart/serial/Stop 1.32
200 TestMountStart/serial/RestartStopped 26.03
201 TestMountStart/serial/VerifyMountPostStop 0.41
204 TestMultiNode/serial/FreshStart2Nodes 102.42
205 TestMultiNode/serial/DeployApp2Nodes 5.51
206 TestMultiNode/serial/PingHostFrom2Pods 0.89
207 TestMultiNode/serial/AddNode 41.71
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.24
210 TestMultiNode/serial/CopyFile 7.66
211 TestMultiNode/serial/StopNode 2.51
212 TestMultiNode/serial/StartAfterStop 28.37
214 TestMultiNode/serial/DeleteNode 2.45
216 TestMultiNode/serial/RestartMultiNode 179.63
217 TestMultiNode/serial/ValidateNameConflict 45.17
224 TestScheduledStopUnix 115.76
228 TestRunningBinaryUpgrade 116.62
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
234 TestNoKubernetes/serial/StartWithK8s 126.28
235 TestNoKubernetes/serial/StartWithStopK8s 20.95
236 TestNoKubernetes/serial/Start 27.63
237 TestStoppedBinaryUpgrade/Setup 2.31
238 TestStoppedBinaryUpgrade/Upgrade 142.23
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
240 TestNoKubernetes/serial/ProfileList 1.03
241 TestNoKubernetes/serial/Stop 1.35
242 TestNoKubernetes/serial/StartNoArgs 42.59
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
252 TestPause/serial/Start 102.73
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
x
+
TestDownloadOnly/v1.20.0/json-events (27.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-904084 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-904084 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.444370238s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.44s)
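
The json-events subtests drive minikube start with -o=json, which switches the output to machine-readable progress events that the test then asserts on. A hedged sketch of a consumer for that stream is below; the per-line JSON shape and the "type"/"data" field names are assumptions about the event format, not taken from this log.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Hypothetical invocation mirroring the test's "(dbg) Run:" line above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json", "--download-only",
		"-p", "download-only-904084", "--kubernetes-version=v1.20.0",
		"--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any line that is not a JSON event
		}
		fmt.Println("event type:", ev["type"], "data:", ev["data"])
	}
	_ = cmd.Wait()
}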

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-904084
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-904084: exit status 85 (78.059445ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-904084 | jenkins | v1.33.0-beta.0 | 19 Apr 24 19:17 UTC |          |
	|         | -p download-only-904084        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 19:17:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 19:17:29.310838  374010 out.go:291] Setting OutFile to fd 1 ...
	I0419 19:17:29.311044  374010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:17:29.311057  374010 out.go:304] Setting ErrFile to fd 2...
	I0419 19:17:29.311063  374010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:17:29.311277  374010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	W0419 19:17:29.311410  374010 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18669-366597/.minikube/config/config.json: open /home/jenkins/minikube-integration/18669-366597/.minikube/config/config.json: no such file or directory
	I0419 19:17:29.312013  374010 out.go:298] Setting JSON to true
	I0419 19:17:29.313107  374010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3595,"bootTime":1713550654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 19:17:29.313178  374010 start.go:139] virtualization: kvm guest
	I0419 19:17:29.316045  374010 out.go:97] [download-only-904084] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 19:17:29.317795  374010 out.go:169] MINIKUBE_LOCATION=18669
	W0419 19:17:29.316183  374010 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball: no such file or directory
	I0419 19:17:29.316256  374010 notify.go:220] Checking for updates...
	I0419 19:17:29.320733  374010 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 19:17:29.322192  374010 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 19:17:29.323778  374010 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 19:17:29.325105  374010 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0419 19:17:29.327787  374010 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 19:17:29.328085  374010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 19:17:29.360792  374010 out.go:97] Using the kvm2 driver based on user configuration
	I0419 19:17:29.360818  374010 start.go:297] selected driver: kvm2
	I0419 19:17:29.360829  374010 start.go:901] validating driver "kvm2" against <nil>
	I0419 19:17:29.361221  374010 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:17:29.361328  374010 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 19:17:29.377024  374010 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 19:17:29.377086  374010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 19:17:29.377566  374010 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0419 19:17:29.377745  374010 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 19:17:29.377825  374010 cni.go:84] Creating CNI manager for ""
	I0419 19:17:29.377843  374010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 19:17:29.377855  374010 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 19:17:29.377944  374010 start.go:340] cluster config:
	{Name:download-only-904084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-904084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:17:29.378132  374010 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:17:29.379947  374010 out.go:97] Downloading VM boot image ...
	I0419 19:17:29.379981  374010 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0419 19:17:38.100554  374010 out.go:97] Starting "download-only-904084" primary control-plane node in "download-only-904084" cluster
	I0419 19:17:38.100594  374010 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 19:17:38.199430  374010 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0419 19:17:38.199479  374010 cache.go:56] Caching tarball of preloaded images
	I0419 19:17:38.199672  374010 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 19:17:38.201453  374010 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0419 19:17:38.201476  374010 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:17:38.304835  374010 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0419 19:17:50.832700  374010 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:17:50.832820  374010 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:17:51.737754  374010 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0419 19:17:51.738137  374010 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/download-only-904084/config.json ...
	I0419 19:17:51.738169  374010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/download-only-904084/config.json: {Name:mkf4a59927135e6b4b03daa528827999750360c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:17:51.738328  374010 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0419 19:17:51.738520  374010 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-904084 host does not exist
	  To start a cluster, run: "minikube start -p download-only-904084"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
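
The preload steps in the log above ("getting checksum", "saving checksum", "verifying checksum") boil down to downloading the tarball together with its published md5 and comparing the two. A self-contained sketch of that verification step is below; the file path is hypothetical and the helper is illustrative rather than minikube's actual download.go, while the expected digest is the one the log requested for the v1.20.0 preload.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 re-creates the "verifying checksum" step: hash the downloaded tarball and
// compare it to the digest appended to the download URL (?checksum=md5:...).
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19")
	log.Println(err)
}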

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-904084
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (15.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-940183 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-940183 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.519880419s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (15.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-940183
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-940183: exit status 85 (74.087441ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-904084 | jenkins | v1.33.0-beta.0 | 19 Apr 24 19:17 UTC |                     |
	|         | -p download-only-904084        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 19:17 UTC | 19 Apr 24 19:17 UTC |
	| delete  | -p download-only-904084        | download-only-904084 | jenkins | v1.33.0-beta.0 | 19 Apr 24 19:17 UTC | 19 Apr 24 19:17 UTC |
	| start   | -o=json --download-only        | download-only-940183 | jenkins | v1.33.0-beta.0 | 19 Apr 24 19:17 UTC |                     |
	|         | -p download-only-940183        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 19:17:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 19:17:57.123132  374699 out.go:291] Setting OutFile to fd 1 ...
	I0419 19:17:57.123427  374699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:17:57.123438  374699 out.go:304] Setting ErrFile to fd 2...
	I0419 19:17:57.123443  374699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 19:17:57.123656  374699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 19:17:57.124241  374699 out.go:298] Setting JSON to true
	I0419 19:17:57.125318  374699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3623,"bootTime":1713550654,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 19:17:57.125389  374699 start.go:139] virtualization: kvm guest
	I0419 19:17:57.127527  374699 out.go:97] [download-only-940183] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 19:17:57.128946  374699 out.go:169] MINIKUBE_LOCATION=18669
	I0419 19:17:57.127726  374699 notify.go:220] Checking for updates...
	I0419 19:17:57.131755  374699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 19:17:57.133255  374699 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 19:17:57.134619  374699 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 19:17:57.135883  374699 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0419 19:17:57.138113  374699 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 19:17:57.138343  374699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 19:17:57.170456  374699 out.go:97] Using the kvm2 driver based on user configuration
	I0419 19:17:57.170494  374699 start.go:297] selected driver: kvm2
	I0419 19:17:57.170501  374699 start.go:901] validating driver "kvm2" against <nil>
	I0419 19:17:57.170801  374699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:17:57.170877  374699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18669-366597/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0419 19:17:57.185584  374699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0419 19:17:57.185633  374699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 19:17:57.186134  374699 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0419 19:17:57.186274  374699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 19:17:57.186328  374699 cni.go:84] Creating CNI manager for ""
	I0419 19:17:57.186341  374699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0419 19:17:57.186351  374699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 19:17:57.186412  374699 start.go:340] cluster config:
	{Name:download-only-940183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-940183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 19:17:57.186504  374699 iso.go:125] acquiring lock: {Name:mk1ad6ddf35ee01c38d2d19d718e87f137956a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 19:17:57.188403  374699 out.go:97] Starting "download-only-940183" primary control-plane node in "download-only-940183" cluster
	I0419 19:17:57.188426  374699 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 19:17:57.363283  374699 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 19:17:57.363335  374699 cache.go:56] Caching tarball of preloaded images
	I0419 19:17:57.363491  374699 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 19:17:57.365435  374699 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0419 19:17:57.365466  374699 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:17:57.541506  374699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0419 19:18:10.967024  374699 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:18:10.967128  374699 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18669-366597/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0419 19:18:11.718373  374699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0419 19:18:11.718725  374699 profile.go:143] Saving config to /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/download-only-940183/config.json ...
	I0419 19:18:11.718754  374699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/download-only-940183/config.json: {Name:mke999291ae9154a619033bf19db8be15799c71b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 19:18:11.718908  374699 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0419 19:18:11.719035  374699 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18669-366597/.minikube/cache/linux/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-940183 host does not exist
	  To start a cluster, run: "minikube start -p download-only-940183"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-940183
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-464950 --alsologtostderr --binary-mirror http://127.0.0.1:34713 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-464950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-464950
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (153.61s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-102119 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-102119 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m32.569275723s)
helpers_test.go:175: Cleaning up "offline-crio-102119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-102119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-102119: (1.041522414s)
--- PASS: TestOffline (153.61s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-310054
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-310054: exit status 85 (63.755434ms)

                                                
                                                
-- stdout --
	* Profile "addons-310054" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310054"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-310054
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-310054: exit status 85 (64.71707ms)

                                                
                                                
-- stdout --
	* Profile "addons-310054" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310054"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestCertOptions (53.28s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-465658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-465658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (51.752548848s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-465658 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-465658 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-465658 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-465658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-465658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-465658: (1.064892737s)
--- PASS: TestCertOptions (53.28s)
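The commands above suggest what TestCertOptions checks: the extra --apiserver-ips/--apiserver-names should land in the apiserver certificate's SANs, and the non-default --apiserver-port should land in the kubeconfig. A hedged manual spot-check against the same profile:

	out/minikube-linux-amd64 -p cert-options-465658 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 'Subject Alternative Name'
	kubectl --context cert-options-465658 config view   # look for an https server URL ending in :8555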

                                                
                                    
x
+
TestCertExpiration (289.89s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-198159 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-198159 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.76590185s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-198159 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-198159 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m3.61325551s)
helpers_test.go:175: Cleaning up "cert-expiration-198159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-198159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-198159: (1.508039837s)
--- PASS: TestCertExpiration (289.89s)
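TestCertExpiration first provisions with deliberately short-lived certificates (--cert-expiration=3m), then restarts the same profile with --cert-expiration=8760h, which should leave year-long certificates in place. A quick way to inspect the resulting validity window (sketch, same profile name as above):

	out/minikube-linux-amd64 -p cert-expiration-198159 ssh "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"   # prints notBefore/notAfter for the apiserver cert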

                                                
                                    
x
+
TestForceSystemdFlag (52.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-725675 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-725675 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.76954535s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-725675 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-725675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-725675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-725675: (1.0303901s)
--- PASS: TestForceSystemdFlag (52.02s)
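The check above reads CRI-O's minikube drop-in to confirm which cgroup manager --force-systemd selects. A condensed version (sketch; the expected value is an assumption based on the flag's name):

	out/minikube-linux-amd64 -p force-systemd-flag-725675 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expect cgroup_manager = "systemd"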

                                                
                                    
x
+
TestForceSystemdEnv (73.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-104265 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-104265 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.565246181s)
helpers_test.go:175: Cleaning up "force-systemd-env-104265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-104265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-104265: (1.003509086s)
--- PASS: TestForceSystemdEnv (73.57s)
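TestForceSystemdEnv presumably exercises the same behaviour driven by the environment rather than the flag, via the MINIKUBE_FORCE_SYSTEMD variable that appears in the startup banners elsewhere in this report. A sketch of the equivalent manual invocation (the variable/value pairing is an assumption, not shown in this log):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-104265 --memory=2048 --driver=kvm2 --container-runtime=crio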

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.09s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.09s)

                                                
                                    
x
+
TestErrorSpam/setup (44.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-588696 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-588696 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-588696 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-588696 --driver=kvm2  --container-runtime=crio: (44.178815013s)
--- PASS: TestErrorSpam/setup (44.18s)

                                                
                                    
x
+
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
x
+
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
x
+
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
x
+
TestErrorSpam/stop (6.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop: (2.301275866s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop: (1.970689323s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-588696 --log_dir /tmp/nospam-588696 stop: (1.826309168s)
--- PASS: TestErrorSpam/stop (6.10s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18669-366597/.minikube/files/etc/test/nested/copy/373998/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (59.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-410415 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.430583146s)
--- PASS: TestFunctional/serial/StartWithProxy (59.43s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (49.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-410415 --alsologtostderr -v=8: (49.708233537s)
functional_test.go:659: soft start took 49.709211249s for "functional-410415" cluster.
--- PASS: TestFunctional/serial/SoftStart (49.71s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-410415 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:3.1: (1.2966414s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:3.3: (1.241488284s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 cache add registry.k8s.io/pause:latest: (1.32052655s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-410415 /tmp/TestFunctionalserialCacheCmdcacheadd_local3720391144/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache add minikube-local-cache-test:functional-410415
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 cache add minikube-local-cache-test:functional-410415: (1.771517447s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache delete minikube-local-cache-test:functional-410415
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-410415
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (234.09631ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
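The reload sequence above is the interesting part: the image is removed inside the node with crictl, cache reload pushes the cached copy back, and the follow-up inspecti succeeds again. Condensed (sketch):

	out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-amd64 -p functional-410415 cache reload
	out/minikube-linux-amd64 -p functional-410415 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again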

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 kubectl -- --context functional-410415 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-410415 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (54.87s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-410415 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.866375105s)
functional_test.go:757: restart took 54.866514307s for "functional-410415" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (54.87s)
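--extra-config passes component-scoped flags straight through; the apiserver.enable-admission-plugins=NamespaceAutoProvision pair used here shows up later in this report's profile dump as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]. The general shape (sketch; placeholders are illustrative):

	out/minikube-linux-amd64 start -p functional-410415 --extra-config=<component>.<flag>=<value> --wait=all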

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-410415 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 logs: (1.495606895s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 logs --file /tmp/TestFunctionalserialLogsFileCmd3030906641/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 logs --file /tmp/TestFunctionalserialLogsFileCmd3030906641/001/logs.txt: (1.465116037s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-410415 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-410415
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-410415: exit status 115 (295.189399ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.15:32359 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-410415 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
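The point of InvalidService: the service table is still printed, but because no running pod backs invalid-svc the command exits 115 with SVC_UNREACHABLE. Condensed round trip (sketch):

	kubectl --context functional-410415 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-410415   # exit 115, SVC_UNREACHABLE
	kubectl --context functional-410415 delete -f testdata/invalidsvc.yaml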

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 config get cpus: exit status 14 (81.332054ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 config get cpus: exit status 14 (61.48685ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
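config get on a key that was never set exits 14 with "specified key could not be found in config"; once set, the same command prints the value. The round trip exercised above (sketch):

	out/minikube-linux-amd64 -p functional-410415 config get cpus     # exit 14 while unset
	out/minikube-linux-amd64 -p functional-410415 config set cpus 2
	out/minikube-linux-amd64 -p functional-410415 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-410415 config unset cpus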

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (17.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410415 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410415 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 386717: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.22s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410415 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.596224ms)

                                                
                                                
-- stdout --
	* [functional-410415] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:02:13.084783  386577 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:02:13.085135  386577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:02:13.085151  386577 out.go:304] Setting ErrFile to fd 2...
	I0419 20:02:13.085158  386577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:02:13.085473  386577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:02:13.086243  386577 out.go:298] Setting JSON to false
	I0419 20:02:13.087745  386577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6279,"bootTime":1713550654,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:02:13.087843  386577 start.go:139] virtualization: kvm guest
	I0419 20:02:13.090445  386577 out.go:177] * [functional-410415] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0419 20:02:13.092286  386577 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:02:13.092258  386577 notify.go:220] Checking for updates...
	I0419 20:02:13.093608  386577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:02:13.094897  386577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:02:13.096246  386577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:02:13.097541  386577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:02:13.098863  386577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:02:13.100745  386577 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:02:13.101423  386577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:02:13.101482  386577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:02:13.118558  386577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0419 20:02:13.119015  386577 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:02:13.119668  386577 main.go:141] libmachine: Using API Version  1
	I0419 20:02:13.119697  386577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:02:13.120076  386577 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:02:13.120322  386577 main.go:141] libmachine: (functional-410415) Calling .DriverName
	I0419 20:02:13.120581  386577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:02:13.120902  386577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:02:13.120939  386577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:02:13.135810  386577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0419 20:02:13.136325  386577 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:02:13.136908  386577 main.go:141] libmachine: Using API Version  1
	I0419 20:02:13.136937  386577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:02:13.137311  386577 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:02:13.137549  386577 main.go:141] libmachine: (functional-410415) Calling .DriverName
	I0419 20:02:13.173446  386577 out.go:177] * Using the kvm2 driver based on existing profile
	I0419 20:02:13.174771  386577 start.go:297] selected driver: kvm2
	I0419 20:02:13.174791  386577 start.go:901] validating driver "kvm2" against &{Name:functional-410415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-410415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:02:13.174965  386577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:02:13.177376  386577 out.go:177] 
	W0419 20:02:13.178841  386577 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0419 20:02:13.180118  386577 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
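--dry-run runs the validation path against the existing profile without touching the VM, which is why the undersized request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (the usable minimum quoted above is 1800MB). Sketch:

	out/minikube-linux-amd64 start -p functional-410415 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exit 23
	out/minikube-linux-amd64 start -p functional-410415 --dry-run --driver=kvm2 --container-runtime=crio                 # validation passes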

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410415 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410415 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.491813ms)

                                                
                                                
-- stdout --
	* [functional-410415] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:02:12.923845  386520 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:02:12.924198  386520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:02:12.924210  386520 out.go:304] Setting ErrFile to fd 2...
	I0419 20:02:12.924217  386520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:02:12.924523  386520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:02:12.925156  386520 out.go:298] Setting JSON to false
	I0419 20:02:12.926289  386520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6279,"bootTime":1713550654,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0419 20:02:12.926359  386520 start.go:139] virtualization: kvm guest
	I0419 20:02:12.928841  386520 out.go:177] * [functional-410415] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0419 20:02:12.931014  386520 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 20:02:12.930974  386520 notify.go:220] Checking for updates...
	I0419 20:02:12.932467  386520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 20:02:12.933939  386520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	I0419 20:02:12.935453  386520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	I0419 20:02:12.937265  386520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0419 20:02:12.938915  386520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 20:02:12.941085  386520 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:02:12.941486  386520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:02:12.941536  386520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:02:12.958071  386520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0419 20:02:12.958461  386520 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:02:12.959134  386520 main.go:141] libmachine: Using API Version  1
	I0419 20:02:12.959160  386520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:02:12.959519  386520 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:02:12.959698  386520 main.go:141] libmachine: (functional-410415) Calling .DriverName
	I0419 20:02:12.959956  386520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 20:02:12.960309  386520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:02:12.960353  386520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:02:12.976310  386520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0419 20:02:12.976850  386520 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:02:12.977401  386520 main.go:141] libmachine: Using API Version  1
	I0419 20:02:12.977427  386520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:02:12.977848  386520 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:02:12.978052  386520 main.go:141] libmachine: (functional-410415) Calling .DriverName
	I0419 20:02:13.012460  386520 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0419 20:02:13.013851  386520 start.go:297] selected driver: kvm2
	I0419 20:02:13.013873  386520 start.go:901] validating driver "kvm2" against &{Name:functional-410415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-410415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 20:02:13.014038  386520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 20:02:13.017735  386520 out.go:177] 
	W0419 20:02:13.019331  386520 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0419 20:02:13.021621  386520 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
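status -f takes a Go template over the status struct; note that "kublet" in the command above is just a literal label in the output string, while the template field itself is {{.Kubelet}}. A sketch with conventional labels:

	out/minikube-linux-amd64 -p functional-410415 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-410415 status -o json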

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-410415 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-410415 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-m4dhz" [a6fb43e3-b2e4-4adc-be3f-c84f16de67d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-m4dhz" [a6fb43e3-b2e4-4adc-be3f-c84f16de67d9] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008618664s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.15:30133
functional_test.go:1671: http://192.168.39.15:30133: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-m4dhz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.15:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.15:30133
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
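ServiceCmdConnect is a deployment -> NodePort service -> URL -> HTTP round trip. Condensed (sketch; the NodePort differs between runs):

	kubectl --context functional-410415 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-410415 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-410415 service hello-node-connect --url)   # e.g. http://192.168.39.15:30133
	curl "$URL"   # echoserver reports the hostname and request headers shown above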

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (50.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9359ce4f-708d-41a7-a8a0-a9e68f08ef92] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025955017s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-410415 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-410415 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410415 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410415 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410415 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4d41ac6-8929-47cd-b20e-7e83711767c2] Pending
helpers_test.go:344: "sp-pod" [a4d41ac6-8929-47cd-b20e-7e83711767c2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a4d41ac6-8929-47cd-b20e-7e83711767c2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004327851s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-410415 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-410415 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-410415 delete -f testdata/storage-provisioner/pod.yaml: (3.37258917s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410415 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [42af29b2-02be-4e3e-bb4c-9f87582bfc59] Pending
helpers_test.go:344: "sp-pod" [42af29b2-02be-4e3e-bb4c-9f87582bfc59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [42af29b2-02be-4e3e-bb4c-9f87582bfc59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004615945s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-410415 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.19s)
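
The sequence above is the point of the test: a file written through the first sp-pod must still be visible from a replacement pod that mounts the same claim. A rough sketch of that round trip with kubectl via os/exec follows; the manifests and context name are the ones used above, and `kubectl wait` stands in for the test's label-based polling.

// pvc_roundtrip.go - sketch: write a file through one pod, read it back from a replacement pod on the same claim.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-410415"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the first pod
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // fresh pod, same claim
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must still be listed
}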

                                                
                                    
TestFunctional/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh -n functional-410415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cp functional-410415:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2058164271/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh -n functional-410415 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh -n functional-410415 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)
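
The cp test only needs two primitives: `minikube cp` to push a file into the guest and `minikube ssh` to read it back. A small sketch of that pair, assuming the same binary path and profile as above:

// cp_check.go - sketch: push a file into the guest with `minikube cp` and read it back over ssh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	run("out/minikube-linux-amd64", "-p", "functional-410415", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	// Read the file back from inside the node and print it; it should match testdata/cp-test.txt.
	fmt.Print(run("out/minikube-linux-amd64", "-p", "functional-410415", "ssh", "-n", "functional-410415", "sudo cat /home/docker/cp-test.txt"))
}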

                                                
                                    
TestFunctional/parallel/MySQL (32.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-410415 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-p6hdn" [ba1e10b6-1ba4-4a25-b308-ab888fbe7d17] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2024/04/19 20:02:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-64454c8b5c-p6hdn" [ba1e10b6-1ba4-4a25-b308-ab888fbe7d17] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.004195907s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410415 exec mysql-64454c8b5c-p6hdn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-410415 exec mysql-64454c8b5c-p6hdn -- mysql -ppassword -e "show databases;": exit status 1 (155.80526ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410415 exec mysql-64454c8b5c-p6hdn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-410415 exec mysql-64454c8b5c-p6hdn -- mysql -ppassword -e "show databases;": exit status 1 (152.588311ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410415 exec mysql-64454c8b5c-p6hdn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.23s)
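
The two ERROR 2002 exits above are expected: mysqld inside the pod is still starting when the first queries arrive, so the check simply retries until the socket accepts connections. A small sketch of that retry loop is below, hedged as an approximation of what the test does; the pod name is copied from this run.

// mysql_retry.go - sketch: retry "show databases;" until mysqld inside the pod accepts connections.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-410415", "exec", "mysql-64454c8b5c-p6hdn", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became reachable: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // ERROR 2002 just means the server socket is not up yet
	}
}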

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/373998/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /etc/test/nested/copy/373998/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/373998.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /etc/ssl/certs/373998.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/373998.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /usr/share/ca-certificates/373998.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3739982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /etc/ssl/certs/3739982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3739982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /usr/share/ca-certificates/3739982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)
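
Both halves of the check above look for the same user certificate in two forms: under its original name (373998.pem) and under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0). A short sketch that probes those paths over `minikube ssh`; the file names are specific to this run.

// certsync_check.go - sketch: confirm the synced certificate is present at the standard CA paths inside the VM.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/373998.pem",             // original file name
		"/usr/share/ca-certificates/373998.pem", // same cert in the ca-certificates tree
		"/etc/ssl/certs/51391683.0",             // OpenSSL subject-hash name for the same cert
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410415",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%-40s present=%v\n", p, err == nil)
	}
}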

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-410415 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "sudo systemctl is-active docker": exit status 1 (275.530244ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "sudo systemctl is-active containerd": exit status 1 (253.753738ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
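
The non-zero exits above are the expected result: with crio as the container runtime, `systemctl is-active docker` and `systemctl is-active containerd` print `inactive` and exit with status 3, which `minikube ssh` surfaces as a failure. A small sketch that treats that combination as success, offered as an illustration only:

// runtime_inactive.go - sketch: verify the docker and containerd units are inactive on the crio node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410415",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		// `systemctl is-active` exits non-zero (status 3) for anything but "active", so the
		// expected outcome here is err != nil together with state == "inactive".
		fmt.Printf("%s: state=%q exited-nonzero=%v\n", unit, state, err != nil)
	}
}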

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-410415 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-410415 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-t7s5r" [f12dcbc0-2d56-4034-933d-f83fdd01bfe4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-t7s5r" [f12dcbc0-2d56-4034-933d-f83fdd01bfe4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005328491s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
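
For reference, the three kubectl steps above (create the deployment, expose it as a NodePort, wait for the pod) can be reproduced outside the harness. A minimal sketch using os/exec, with `kubectl wait` standing in for the test's own polling on pods labelled app=hello-node:

// deploy_hello_node.go - sketch: deploy echoserver, expose it on a NodePort, wait for it to become available.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-410415"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// kubectl wait is a stand-in for the test's label-based readiness polling.
	kubectl("wait", "--for=condition=Available", "deployment/hello-node", "--timeout=600s")
}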

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "258.057258ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "70.266503ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "263.451452ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "74.280808ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdany-port3375928982/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713556931659589480" to /tmp/TestFunctionalparallelMountCmdany-port3375928982/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713556931659589480" to /tmp/TestFunctionalparallelMountCmdany-port3375928982/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713556931659589480" to /tmp/TestFunctionalparallelMountCmdany-port3375928982/001/test-1713556931659589480
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.586922ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 19 20:02 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 19 20:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 19 20:02 test-1713556931659589480
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh cat /mount-9p/test-1713556931659589480
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-410415 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [27604ffd-4f93-4bdd-91f2-78635f852cb0] Pending
helpers_test.go:344: "busybox-mount" [27604ffd-4f93-4bdd-91f2-78635f852cb0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [27604ffd-4f93-4bdd-91f2-78635f852cb0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [27604ffd-4f93-4bdd-91f2-78635f852cb0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004973206s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-410415 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdany-port3375928982/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.73s)
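
The failed findmnt at the start of this block is just the test polling before the 9p mount has finished coming up; it retries until the mount is visible, then runs the busybox pod against it. A sketch of the same start-and-poll pattern around `minikube mount` follows, hedged as an approximation; the host directory is a hypothetical stand-in for the test's per-run temp dir.

// mount_poll.go - sketch: start a 9p mount in the background and poll findmnt until it shows up in the guest.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-410415",
		"/tmp/mount-src:/mount-9p") // hypothetical host directory; the test uses a per-run temp dir
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill() // the test stops the mount daemon the same way once it is done

	inGuest := func() bool {
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-410415",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil
	}
	for i := 0; i < 30; i++ {
		if inGuest() {
			log.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(2 * time.Second) // early findmnt failures are expected while the mount comes up
	}
	log.Fatal("mount never appeared in the guest")
}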

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service list -o json
functional_test.go:1490: Took "456.587717ms" to run "out/minikube-linux-amd64 -p functional-410415 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.15:30802
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdspecific-port387780844/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.437392ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdspecific-port387780844/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "sudo umount -f /mount-9p": exit status 1 (248.315926ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-410415 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdspecific-port387780844/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.15:30802
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410415 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-410415
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-410415
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410415 image ls --format short --alsologtostderr:
I0419 20:02:57.607684  388477 out.go:291] Setting OutFile to fd 1 ...
I0419 20:02:57.607856  388477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.607870  388477 out.go:304] Setting ErrFile to fd 2...
I0419 20:02:57.607876  388477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.608233  388477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
I0419 20:02:57.609245  388477 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.609413  388477 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.610046  388477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.610119  388477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.628136  388477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
I0419 20:02:57.628718  388477 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.629290  388477 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.629318  388477 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.629721  388477 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.629958  388477 main.go:141] libmachine: (functional-410415) Calling .GetState
I0419 20:02:57.631674  388477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.631731  388477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.649239  388477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
I0419 20:02:57.649714  388477 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.650199  388477 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.650219  388477 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.650642  388477 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.650793  388477 main.go:141] libmachine: (functional-410415) Calling .DriverName
I0419 20:02:57.650963  388477 ssh_runner.go:195] Run: systemctl --version
I0419 20:02:57.650992  388477 main.go:141] libmachine: (functional-410415) Calling .GetSSHHostname
I0419 20:02:57.654168  388477 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.654545  388477 main.go:141] libmachine: (functional-410415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c5:ff", ip: ""} in network mk-functional-410415: {Iface:virbr1 ExpiryTime:2024-04-19 20:59:25 +0000 UTC Type:0 Mac:52:54:00:1d:c5:ff Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-410415 Clientid:01:52:54:00:1d:c5:ff}
I0419 20:02:57.654630  388477 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined IP address 192.168.39.15 and MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.654740  388477 main.go:141] libmachine: (functional-410415) Calling .GetSSHPort
I0419 20:02:57.654939  388477 main.go:141] libmachine: (functional-410415) Calling .GetSSHKeyPath
I0419 20:02:57.655167  388477 main.go:141] libmachine: (functional-410415) Calling .GetSSHUsername
I0419 20:02:57.655291  388477 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/functional-410415/id_rsa Username:docker}
I0419 20:02:57.739992  388477 ssh_runner.go:195] Run: sudo crictl images --output json
I0419 20:02:57.835602  388477 main.go:141] libmachine: Making call to close driver server
I0419 20:02:57.835614  388477 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:57.835920  388477 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:57.835938  388477 main.go:141] libmachine: Making call to close connection to plugin binary
I0419 20:02:57.835960  388477 main.go:141] libmachine: Making call to close driver server
I0419 20:02:57.835968  388477 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:57.836185  388477 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:57.836199  388477 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
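
As the stderr above shows, `image ls` launches the kvm2 driver plugin, SSHes to the node, and runs `sudo crictl images --output json`, then flattens the result into repo tags. A rough sketch of that last step is below; it assumes it is run where crictl can reach the CRI socket (for example inside the node via `minikube ssh`), and the struct only mirrors the fields the sketch needs.

// image_list.go - sketch: flatten `crictl images --output json` into repo tags, as in the image ls output above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crictlImages mirrors just the fields this sketch needs from crictl's JSON output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images: %v", err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("decoding crictl output: %v", err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}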

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410415 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| localhost/minikube-local-cache-test     | functional-410415  | 2b6f8840ec015 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 2ac752d7aeb1d | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/google-containers/addon-resizer  | functional-410415  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410415 image ls --format table --alsologtostderr:
I0419 20:02:58.165186  388589 out.go:291] Setting OutFile to fd 1 ...
I0419 20:02:58.165317  388589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:58.165328  388589 out.go:304] Setting ErrFile to fd 2...
I0419 20:02:58.165332  388589 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:58.165558  388589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
I0419 20:02:58.166207  388589 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:58.166317  388589 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:58.166732  388589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:58.166782  388589 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:58.182195  388589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
I0419 20:02:58.182702  388589 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:58.183326  388589 main.go:141] libmachine: Using API Version  1
I0419 20:02:58.183351  388589 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:58.183672  388589 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:58.183867  388589 main.go:141] libmachine: (functional-410415) Calling .GetState
I0419 20:02:58.185600  388589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:58.185669  388589 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:58.200419  388589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
I0419 20:02:58.200907  388589 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:58.201419  388589 main.go:141] libmachine: Using API Version  1
I0419 20:02:58.201448  388589 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:58.201834  388589 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:58.202042  388589 main.go:141] libmachine: (functional-410415) Calling .DriverName
I0419 20:02:58.202240  388589 ssh_runner.go:195] Run: systemctl --version
I0419 20:02:58.202267  388589 main.go:141] libmachine: (functional-410415) Calling .GetSSHHostname
I0419 20:02:58.205034  388589 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:58.205479  388589 main.go:141] libmachine: (functional-410415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c5:ff", ip: ""} in network mk-functional-410415: {Iface:virbr1 ExpiryTime:2024-04-19 20:59:25 +0000 UTC Type:0 Mac:52:54:00:1d:c5:ff Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-410415 Clientid:01:52:54:00:1d:c5:ff}
I0419 20:02:58.205510  388589 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined IP address 192.168.39.15 and MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:58.205579  388589 main.go:141] libmachine: (functional-410415) Calling .GetSSHPort
I0419 20:02:58.205752  388589 main.go:141] libmachine: (functional-410415) Calling .GetSSHKeyPath
I0419 20:02:58.205914  388589 main.go:141] libmachine: (functional-410415) Calling .GetSSHUsername
I0419 20:02:58.206050  388589 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/functional-410415/id_rsa Username:docker}
I0419 20:02:58.307234  388589 ssh_runner.go:195] Run: sudo crictl images --output json
I0419 20:02:58.419049  388589 main.go:141] libmachine: Making call to close driver server
I0419 20:02:58.419080  388589 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:58.419361  388589 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:58.419378  388589 main.go:141] libmachine: Making call to close connection to plugin binary
I0419 20:02:58.419394  388589 main.go:141] libmachine: Making call to close driver server
I0419 20:02:58.419401  388589 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:58.419633  388589 main.go:141] libmachine: (functional-410415) DBG | Closing plugin on server side
I0419 20:02:58.419709  388589 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:58.419753  388589 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410415 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":["docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419","docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5"],"repoTags":["docker.io/library/nginx:latest"],"size":"191703878"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s
-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2b6f8840ec01550b303f0bb5d822b2e915498945540a978a837c0d00b751648e","repoDigests":["localhost/minikube-local-cache-test@sha256:f70dc8effc621b9fe86788d9132d91f2a11e4aa531b27d79485b752b0e93e5c4"],"repoTags":["localhost/minikube-local-cache-test:functional-410415"],"size":"3330"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c
37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:l
atest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa1666
0934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s
.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-410415"],"size":"34114467"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube
-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410415 image ls --format json --alsologtostderr:
I0419 20:02:57.905461  388535 out.go:291] Setting OutFile to fd 1 ...
I0419 20:02:57.905743  388535 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.905755  388535 out.go:304] Setting ErrFile to fd 2...
I0419 20:02:57.905762  388535 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.905970  388535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
I0419 20:02:57.906599  388535 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.906713  388535 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.907101  388535 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.907151  388535 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.922439  388535 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
I0419 20:02:57.922921  388535 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.923498  388535 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.923523  388535 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.923908  388535 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.924157  388535 main.go:141] libmachine: (functional-410415) Calling .GetState
I0419 20:02:57.926231  388535 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.926274  388535 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.941419  388535 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
I0419 20:02:57.942063  388535 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.942577  388535 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.942595  388535 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.942965  388535 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.943165  388535 main.go:141] libmachine: (functional-410415) Calling .DriverName
I0419 20:02:57.943460  388535 ssh_runner.go:195] Run: systemctl --version
I0419 20:02:57.943507  388535 main.go:141] libmachine: (functional-410415) Calling .GetSSHHostname
I0419 20:02:57.947427  388535 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.947843  388535 main.go:141] libmachine: (functional-410415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c5:ff", ip: ""} in network mk-functional-410415: {Iface:virbr1 ExpiryTime:2024-04-19 20:59:25 +0000 UTC Type:0 Mac:52:54:00:1d:c5:ff Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-410415 Clientid:01:52:54:00:1d:c5:ff}
I0419 20:02:57.947878  388535 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined IP address 192.168.39.15 and MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.947949  388535 main.go:141] libmachine: (functional-410415) Calling .GetSSHPort
I0419 20:02:57.948122  388535 main.go:141] libmachine: (functional-410415) Calling .GetSSHKeyPath
I0419 20:02:57.948289  388535 main.go:141] libmachine: (functional-410415) Calling .GetSSHUsername
I0419 20:02:57.948458  388535 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/functional-410415/id_rsa Username:docker}
I0419 20:02:58.043827  388535 ssh_runner.go:195] Run: sudo crictl images --output json
I0419 20:02:58.092334  388535 main.go:141] libmachine: Making call to close driver server
I0419 20:02:58.092355  388535 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:58.092713  388535 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:58.092732  388535 main.go:141] libmachine: Making call to close connection to plugin binary
I0419 20:02:58.092741  388535 main.go:141] libmachine: Making call to close driver server
I0419 20:02:58.092749  388535 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:58.093025  388535 main.go:141] libmachine: (functional-410415) DBG | Closing plugin on server side
I0419 20:02:58.093057  388535 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:58.093067  388535 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410415 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-410415
size: "34114467"
- id: 2b6f8840ec01550b303f0bb5d822b2e915498945540a978a837c0d00b751648e
repoDigests:
- localhost/minikube-local-cache-test@sha256:f70dc8effc621b9fe86788d9132d91f2a11e4aa531b27d79485b752b0e93e5c4
repoTags:
- localhost/minikube-local-cache-test:functional-410415
size: "3330"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests:
- docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419
- docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5
repoTags:
- docker.io/library/nginx:latest
size: "191703878"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410415 image ls --format yaml --alsologtostderr:
I0419 20:02:57.612995  388478 out.go:291] Setting OutFile to fd 1 ...
I0419 20:02:57.613109  388478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.613121  388478 out.go:304] Setting ErrFile to fd 2...
I0419 20:02:57.613125  388478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:57.613344  388478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
I0419 20:02:57.614000  388478 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.614117  388478 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:57.614539  388478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.614585  388478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.634254  388478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46613
I0419 20:02:57.634757  388478 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.635430  388478 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.635451  388478 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.635898  388478 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.636161  388478 main.go:141] libmachine: (functional-410415) Calling .GetState
I0419 20:02:57.637949  388478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:57.637986  388478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:57.653530  388478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
I0419 20:02:57.654038  388478 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:57.654621  388478 main.go:141] libmachine: Using API Version  1
I0419 20:02:57.654642  388478 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:57.655100  388478 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:57.655300  388478 main.go:141] libmachine: (functional-410415) Calling .DriverName
I0419 20:02:57.655527  388478 ssh_runner.go:195] Run: systemctl --version
I0419 20:02:57.655559  388478 main.go:141] libmachine: (functional-410415) Calling .GetSSHHostname
I0419 20:02:57.658701  388478 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.659161  388478 main.go:141] libmachine: (functional-410415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c5:ff", ip: ""} in network mk-functional-410415: {Iface:virbr1 ExpiryTime:2024-04-19 20:59:25 +0000 UTC Type:0 Mac:52:54:00:1d:c5:ff Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-410415 Clientid:01:52:54:00:1d:c5:ff}
I0419 20:02:57.659201  388478 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined IP address 192.168.39.15 and MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:57.659307  388478 main.go:141] libmachine: (functional-410415) Calling .GetSSHPort
I0419 20:02:57.659470  388478 main.go:141] libmachine: (functional-410415) Calling .GetSSHKeyPath
I0419 20:02:57.659605  388478 main.go:141] libmachine: (functional-410415) Calling .GetSSHUsername
I0419 20:02:57.659722  388478 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/functional-410415/id_rsa Username:docker}
I0419 20:02:57.758535  388478 ssh_runner.go:195] Run: sudo crictl images --output json
I0419 20:02:57.815739  388478 main.go:141] libmachine: Making call to close driver server
I0419 20:02:57.815754  388478 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:57.816040  388478 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:57.816061  388478 main.go:141] libmachine: Making call to close connection to plugin binary
I0419 20:02:57.816076  388478 main.go:141] libmachine: (functional-410415) DBG | Closing plugin on server side
I0419 20:02:57.816080  388478 main.go:141] libmachine: Making call to close driver server
I0419 20:02:57.816113  388478 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:02:57.816440  388478 main.go:141] libmachine: (functional-410415) DBG | Closing plugin on server side
I0419 20:02:57.816533  388478 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:02:57.816572  388478 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh pgrep buildkitd: exit status 1 (222.586898ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image build -t localhost/my-image:functional-410415 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image build -t localhost/my-image:functional-410415 testdata/build --alsologtostderr: (3.166623597s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410415 image build -t localhost/my-image:functional-410415 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bbbe1b06b77
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-410415
--> 4cec718f3c1
Successfully tagged localhost/my-image:functional-410415
4cec718f3c112364c09b997ec6047e493536f677c9fa7a799ab105346e74bce2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410415 image build -t localhost/my-image:functional-410415 testdata/build --alsologtostderr:
I0419 20:02:58.107515  388577 out.go:291] Setting OutFile to fd 1 ...
I0419 20:02:58.107657  388577 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:58.107668  388577 out.go:304] Setting ErrFile to fd 2...
I0419 20:02:58.107673  388577 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 20:02:58.107861  388577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
I0419 20:02:58.108550  388577 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:58.109348  388577 config.go:182] Loaded profile config "functional-410415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0419 20:02:58.109717  388577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:58.109790  388577 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:58.126060  388577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
I0419 20:02:58.126661  388577 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:58.127203  388577 main.go:141] libmachine: Using API Version  1
I0419 20:02:58.127222  388577 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:58.127543  388577 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:58.127688  388577 main.go:141] libmachine: (functional-410415) Calling .GetState
I0419 20:02:58.130305  388577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0419 20:02:58.130344  388577 main.go:141] libmachine: Launching plugin server for driver kvm2
I0419 20:02:58.146008  388577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
I0419 20:02:58.146574  388577 main.go:141] libmachine: () Calling .GetVersion
I0419 20:02:58.147247  388577 main.go:141] libmachine: Using API Version  1
I0419 20:02:58.147273  388577 main.go:141] libmachine: () Calling .SetConfigRaw
I0419 20:02:58.147641  388577 main.go:141] libmachine: () Calling .GetMachineName
I0419 20:02:58.147920  388577 main.go:141] libmachine: (functional-410415) Calling .DriverName
I0419 20:02:58.148183  388577 ssh_runner.go:195] Run: systemctl --version
I0419 20:02:58.148209  388577 main.go:141] libmachine: (functional-410415) Calling .GetSSHHostname
I0419 20:02:58.151468  388577 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:58.151957  388577 main.go:141] libmachine: (functional-410415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c5:ff", ip: ""} in network mk-functional-410415: {Iface:virbr1 ExpiryTime:2024-04-19 20:59:25 +0000 UTC Type:0 Mac:52:54:00:1d:c5:ff Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-410415 Clientid:01:52:54:00:1d:c5:ff}
I0419 20:02:58.152076  388577 main.go:141] libmachine: (functional-410415) DBG | domain functional-410415 has defined IP address 192.168.39.15 and MAC address 52:54:00:1d:c5:ff in network mk-functional-410415
I0419 20:02:58.152353  388577 main.go:141] libmachine: (functional-410415) Calling .GetSSHPort
I0419 20:02:58.152519  388577 main.go:141] libmachine: (functional-410415) Calling .GetSSHKeyPath
I0419 20:02:58.152706  388577 main.go:141] libmachine: (functional-410415) Calling .GetSSHUsername
I0419 20:02:58.152832  388577 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/functional-410415/id_rsa Username:docker}
I0419 20:02:58.244689  388577 build_images.go:161] Building image from path: /tmp/build.2047710709.tar
I0419 20:02:58.244766  388577 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0419 20:02:58.258071  388577 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2047710709.tar
I0419 20:02:58.263324  388577 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2047710709.tar: stat -c "%s %y" /var/lib/minikube/build/build.2047710709.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2047710709.tar': No such file or directory
I0419 20:02:58.263369  388577 ssh_runner.go:362] scp /tmp/build.2047710709.tar --> /var/lib/minikube/build/build.2047710709.tar (3072 bytes)
I0419 20:02:58.300223  388577 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2047710709
I0419 20:02:58.322182  388577 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2047710709 -xf /var/lib/minikube/build/build.2047710709.tar
I0419 20:02:58.351279  388577 crio.go:315] Building image: /var/lib/minikube/build/build.2047710709
I0419 20:02:58.351379  388577 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-410415 /var/lib/minikube/build/build.2047710709 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0419 20:03:01.178642  388577 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-410415 /var/lib/minikube/build/build.2047710709 --cgroup-manager=cgroupfs: (2.827229411s)
I0419 20:03:01.178726  388577 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2047710709
I0419 20:03:01.191800  388577 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2047710709.tar
I0419 20:03:01.205449  388577 build_images.go:217] Built localhost/my-image:functional-410415 from /tmp/build.2047710709.tar
I0419 20:03:01.205502  388577 build_images.go:133] succeeded building to: functional-410415
I0419 20:03:01.205509  388577 build_images.go:134] failed building to: 
I0419 20:03:01.205542  388577 main.go:141] libmachine: Making call to close driver server
I0419 20:03:01.205558  388577 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:03:01.205858  388577 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:03:01.205880  388577 main.go:141] libmachine: Making call to close connection to plugin binary
I0419 20:03:01.205890  388577 main.go:141] libmachine: Making call to close driver server
I0419 20:03:01.205899  388577 main.go:141] libmachine: (functional-410415) Calling .Close
I0419 20:03:01.205899  388577 main.go:141] libmachine: (functional-410415) DBG | Closing plugin on server side
I0419 20:03:01.206126  388577 main.go:141] libmachine: Successfully made call to close driver server
I0419 20:03:01.206138  388577 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
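For reference, the three build steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) can be reproduced by hand against the same profile. The sketch below is illustrative only: the real contents of testdata/build are not shown in this log, so the Dockerfile and content.txt written here are assumptions that merely match the logged steps.

    # Recreate a build context equivalent to the logged steps (illustrative contents).
    mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo "placeholder payload" > content.txt
    # Build inside the functional-410415 VM; with CRI-O the log above shows podman doing the work.
    out/minikube-linux-amd64 -p functional-410415 image build -t localhost/my-image:functional-410415 /tmp/build-ctx --alsologtostderr
    out/minikube-linux-amd64 -p functional-410415 image ls    # localhost/my-image:functional-410415 should now be listed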

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.926131051s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-410415
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T" /mount1: exit status 1 (288.487155ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-410415 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410415 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248827860/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
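The cleanup check above mounts a single host directory at three guest paths, verifies each with findmnt over ssh, and then tears everything down with one --kill invocation. A hedged manual equivalent, using an arbitrary /tmp/mnt-src in place of the test's temporary directory and backgrounding the mounts the way the test daemons do:

    SRC=/tmp/mnt-src && mkdir -p "$SRC"
    out/minikube-linux-amd64 mount -p functional-410415 "$SRC:/mount1" --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-410415 "$SRC:/mount2" --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-410415 "$SRC:/mount3" --alsologtostderr -v=1 &
    # The first findmnt can fail (as it did above) until the mounts settle; retry if needed.
    out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount1"
    out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount2"
    out/minikube-linux-amd64 -p functional-410415 ssh "findmnt -T /mount3"
    # Kill every mount process belonging to this profile in one shot.
    out/minikube-linux-amd64 mount -p functional-410415 --kill=true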

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr: (4.648790032s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.90s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr: (4.943495111s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.795814148s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-410415
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image load --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr: (13.329884579s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image save gcr.io/google-containers/addon-resizer:functional-410415 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image save gcr.io/google-containers/addon-resizer:functional-410415 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.720154425s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image rm gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.833878089s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-410415
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-410415 image save --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-410415 image save --daemon gcr.io/google-containers/addon-resizer:functional-410415 --alsologtostderr: (1.298330118s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-410415
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)
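The four image tests above (SaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) together make a round trip between the host docker daemon, a tarball, and the CRI-O image store inside the VM. A condensed manual version, assuming the addon-resizer tag prepared in ImageCommands/Setup and an arbitrary tarball path:

    IMG=gcr.io/google-containers/addon-resizer:functional-410415
    TAR=/tmp/addon-resizer-save.tar    # arbitrary path; the test writes into its Jenkins workspace instead
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 "$IMG"
    out/minikube-linux-amd64 -p functional-410415 image save "$IMG" "$TAR" --alsologtostderr
    out/minikube-linux-amd64 -p functional-410415 image rm "$IMG" --alsologtostderr
    out/minikube-linux-amd64 -p functional-410415 image load "$TAR" --alsologtostderr
    # Export the image from the VM back into the host docker daemon and confirm it is visible there.
    docker rmi "$IMG"
    out/minikube-linux-amd64 -p functional-410415 image save --daemon "$IMG" --alsologtostderr
    docker image inspect "$IMG"
    out/minikube-linux-amd64 -p functional-410415 image ls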

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-410415
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-410415
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-410415
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (213.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-423356 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-423356 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m32.526225191s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.22s)
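This is the only start in the suite that passes --ha, which requests a highly available, multi-control-plane topology that the later node add/delete tests run against. The invocation, reproduced from the log:

    out/minikube-linux-amd64 start -p ha-423356 --wait=true --memory=2200 --ha -v=7 \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    # Check that every node in the new cluster reports a healthy status.
    out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr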

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-423356 -- rollout status deployment/busybox: (4.331812751s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-4t8f9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-fq5c2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-wqfc4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-4t8f9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-fq5c2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-wqfc4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-4t8f9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-fq5c2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-wqfc4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.83s)
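DeployApp applies the busybox DNS-test manifest, waits for the rollout, and then runs lookups from each replica. The apply-and-wait portion as a standalone sequence; the manifest path is the one used by the test and is assumed to exist in the checkout:

    out/minikube-linux-amd64 kubectl -p ha-423356 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-423356 -- rollout status deployment/busybox
    # List the pod IPs and names that the per-pod lookups are then run against.
    out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[*].status.podIP}'
    out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[*].metadata.name}'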

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-4t8f9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-4t8f9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-fq5c2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-fq5c2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-wqfc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-423356 -- exec busybox-fc5497c4f-wqfc4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)
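Each pod check above resolves host.minikube.internal inside the pod, cuts the resolved address out of the nslookup output, and pings it once (192.168.39.1 is the kvm2 host-side gateway in this run). The same check for a single pod, picking the first busybox replica by jsonpath instead of hard-coding a pod name:

    POD=$(out/minikube-linux-amd64 kubectl -p ha-423356 -- get pods -o jsonpath='{.items[0].metadata.name}')
    out/minikube-linux-amd64 kubectl -p ha-423356 -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-423356 -- exec "$POD" -- sh -c "ping -c 1 192.168.39.1"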

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (48.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-423356 -v=7 --alsologtostderr
E0419 20:07:10.228139  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.233967  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.245067  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.265468  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.306453  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.386902  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.547369  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:10.867871  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:11.508410  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:12.788602  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:15.349332  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:20.470319  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:07:30.711313  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-423356 -v=7 --alsologtostderr: (47.799335845s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-423356 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (14.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp testdata/cp-test.txt ha-423356:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356:/home/docker/cp-test.txt ha-423356-m02:/home/docker/cp-test_ha-423356_ha-423356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test_ha-423356_ha-423356-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356:/home/docker/cp-test.txt ha-423356-m03:/home/docker/cp-test_ha-423356_ha-423356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test_ha-423356_ha-423356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356:/home/docker/cp-test.txt ha-423356-m04:/home/docker/cp-test_ha-423356_ha-423356-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test_ha-423356_ha-423356-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp testdata/cp-test.txt ha-423356-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m02:/home/docker/cp-test.txt ha-423356:/home/docker/cp-test_ha-423356-m02_ha-423356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test_ha-423356-m02_ha-423356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m02:/home/docker/cp-test.txt ha-423356-m03:/home/docker/cp-test_ha-423356-m02_ha-423356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test_ha-423356-m02_ha-423356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m02:/home/docker/cp-test.txt ha-423356-m04:/home/docker/cp-test_ha-423356-m02_ha-423356-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test_ha-423356-m02_ha-423356-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp testdata/cp-test.txt ha-423356-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt ha-423356:/home/docker/cp-test_ha-423356-m03_ha-423356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test_ha-423356-m03_ha-423356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt ha-423356-m02:/home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test_ha-423356-m03_ha-423356-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m03:/home/docker/cp-test.txt ha-423356-m04:/home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test_ha-423356-m03_ha-423356-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp testdata/cp-test.txt ha-423356-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3874234121/001/cp-test_ha-423356-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt ha-423356:/home/docker/cp-test_ha-423356-m04_ha-423356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test_ha-423356-m04_ha-423356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt ha-423356-m02:/home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test_ha-423356-m04_ha-423356-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 cp ha-423356-m04:/home/docker/cp-test.txt ha-423356-m03:/home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m03 "sudo cat /home/docker/cp-test_ha-423356-m04_ha-423356-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.02s)
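The copy matrix above repeats one pattern for every node pair: cp a file onto a node, read it back over ssh, then copy node-to-node and verify on the destination. One iteration of that pattern, using the primary node and the -m02 secondary:

    out/minikube-linux-amd64 -p ha-423356 cp testdata/cp-test.txt ha-423356:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356 "sudo cat /home/docker/cp-test.txt"
    # Node-to-node copy uses the same cp subcommand with node-prefixed source and destination paths.
    out/minikube-linux-amd64 -p ha-423356 cp ha-423356:/home/docker/cp-test.txt \
      ha-423356-m02:/home/docker/cp-test_ha-423356_ha-423356-m02.txt
    out/minikube-linux-amd64 -p ha-423356 ssh -n ha-423356-m02 "sudo cat /home/docker/cp-test_ha-423356_ha-423356-m02.txt"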

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.512427268s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-423356 node delete m03 -v=7 --alsologtostderr: (16.751625033s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (328.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-423356 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0419 20:22:10.228001  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
E0419 20:23:33.275422  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-423356 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m27.206926365s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (328.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-423356 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-423356 --control-plane -v=7 --alsologtostderr: (1m17.259229205s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.14s)
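Between AddWorkerNode, DeleteSecondaryNode, and this test, the suite exercises the node lifecycle through plain node subcommands against the same profile. Condensed from the logged invocations:

    out/minikube-linux-amd64 node add -p ha-423356 -v=7 --alsologtostderr                   # join a worker node
    out/minikube-linux-amd64 node add -p ha-423356 --control-plane -v=7 --alsologtostderr   # join an extra control plane
    out/minikube-linux-amd64 -p ha-423356 node delete m03 -v=7 --alsologtostderr            # remove a secondary node
    out/minikube-linux-amd64 -p ha-423356 status -v=7 --alsologtostderr
    kubectl get nodes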

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                    
x
+
TestJSONOutput/start/Command (62.6s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-268970 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0419 20:27:10.227676  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-268970 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.596061291s)
--- PASS: TestJSONOutput/start/Command (62.60s)
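The JSONOutput group re-runs start, pause, and unpause with --output=json so progress is emitted as machine-readable events instead of the usual console text; the DistinctCurrentSteps and IncreasingCurrentSteps subtests then check the step numbering in that stream. The start invocation, with the stream captured to a file for inspection (the tee redirection is an addition for illustration, not part of the test):

    out/minikube-linux-amd64 start -p json-output-268970 --output=json --user=testUser \
      --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio | tee start-events.json
    # Each emitted line is a JSON event; inspect the raw stream for the exact schema before parsing it.
    head -n 5 start-events.json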

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-268970 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-268970 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-268970 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-268970 --output=json --user=testUser: (7.378492831s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-060422 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-060422 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.416957ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c9f709ce-f89e-4984-a96e-ce9e845be7fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-060422] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"101f501f-ac89-4e8d-83b2-5ede9c8a68bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18669"}}
	{"specversion":"1.0","id":"3d0c692d-8a69-4f1a-901a-bdb1f3dc0a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1751038f-711d-443e-94a7-2ea72556e0f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig"}}
	{"specversion":"1.0","id":"8e8e9c33-ff28-4c62-9275-699953bcee80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube"}}
	{"specversion":"1.0","id":"92b88ef4-0af6-46e9-a666-50adad3a9288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"290df07f-8f55-4144-bbe7-d6e6ad5ab907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1fa52ab-e1d6-4fbf-86d4-cae2a212a237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-060422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-060422
--- PASS: TestErrorJSONOutput (0.22s)
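Every line of the --output=json stream above is a self-contained CloudEvents-style JSON object, so the terminating error event can be isolated by its type field. A minimal sketch, assuming jq is available on the host (jq is not used by the test itself):

	out/minikube-linux-amd64 start -p json-output-error-060422 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

For the run shown here this would print: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64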

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (90.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-101522 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-101522 --driver=kvm2  --container-runtime=crio: (43.406923754s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-105211 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-105211 --driver=kvm2  --container-runtime=crio: (44.047199635s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-101522
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-105211
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-105211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-105211
helpers_test.go:175: Cleaning up "first-101522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-101522
--- PASS: TestMinikubeProfile (90.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-187266 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-187266 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.401766617s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.40s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-187266 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-187266 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-207200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-207200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.632932481s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-187266 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.32s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-207200
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-207200: (1.319150294s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
TestMountStart/serial/RestartStopped (26.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-207200
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-207200: (25.033204559s)
--- PASS: TestMountStart/serial/RestartStopped (26.03s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207200 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (102.42s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0419 20:32:10.227378  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.992633232s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.42s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-151935 -- rollout status deployment/busybox: (3.905990477s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-f2s7v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-td4zn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-f2s7v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-td4zn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-f2s7v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-td4zn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.51s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-f2s7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-f2s7v -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-td4zn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151935 -- exec busybox-fc5497c4f-td4zn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (41.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-151935 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-151935 -v 3 --alsologtostderr: (41.107717405s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-151935 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp testdata/cp-test.txt multinode-151935:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935:/home/docker/cp-test.txt multinode-151935-m02:/home/docker/cp-test_multinode-151935_multinode-151935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test_multinode-151935_multinode-151935-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935:/home/docker/cp-test.txt multinode-151935-m03:/home/docker/cp-test_multinode-151935_multinode-151935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test_multinode-151935_multinode-151935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp testdata/cp-test.txt multinode-151935-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt multinode-151935:/home/docker/cp-test_multinode-151935-m02_multinode-151935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test_multinode-151935-m02_multinode-151935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m02:/home/docker/cp-test.txt multinode-151935-m03:/home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test_multinode-151935-m02_multinode-151935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp testdata/cp-test.txt multinode-151935-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3456807115/001/cp-test_multinode-151935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt multinode-151935:/home/docker/cp-test_multinode-151935-m03_multinode-151935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935 "sudo cat /home/docker/cp-test_multinode-151935-m03_multinode-151935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 cp multinode-151935-m03:/home/docker/cp-test.txt multinode-151935-m02:/home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 ssh -n multinode-151935-m02 "sudo cat /home/docker/cp-test_multinode-151935-m03_multinode-151935-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.66s)
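The copy test pushes the same file through every node pair: each cp is immediately verified with an ssh -n ... sudo cat of the destination path. The basic round trip, sketched with a placeholder node name (<node> is not a literal value used by the test):

	out/minikube-linux-amd64 -p multinode-151935 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-151935 ssh -n <node> "sudo cat /home/docker/cp-test.txt"

where <node> stands for multinode-151935, multinode-151935-m02 or multinode-151935-m03.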

                                                
                                    
TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-151935 node stop m03: (1.631737063s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151935 status: exit status 7 (440.429996ms)

                                                
                                                
-- stdout --
	multinode-151935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr: exit status 7 (441.715888ms)

                                                
                                                
-- stdout --
	multinode-151935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 20:33:54.885897  406293 out.go:291] Setting OutFile to fd 1 ...
	I0419 20:33:54.886012  406293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:33:54.886020  406293 out.go:304] Setting ErrFile to fd 2...
	I0419 20:33:54.886024  406293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 20:33:54.886237  406293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18669-366597/.minikube/bin
	I0419 20:33:54.886419  406293 out.go:298] Setting JSON to false
	I0419 20:33:54.886445  406293 mustload.go:65] Loading cluster: multinode-151935
	I0419 20:33:54.886525  406293 notify.go:220] Checking for updates...
	I0419 20:33:54.886829  406293 config.go:182] Loaded profile config "multinode-151935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0419 20:33:54.886842  406293 status.go:255] checking status of multinode-151935 ...
	I0419 20:33:54.887220  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:54.887271  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:54.906022  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0419 20:33:54.906440  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:54.907041  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:54.907063  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:54.907401  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:54.907566  406293 main.go:141] libmachine: (multinode-151935) Calling .GetState
	I0419 20:33:54.909067  406293 status.go:330] multinode-151935 host status = "Running" (err=<nil>)
	I0419 20:33:54.909087  406293 host.go:66] Checking if "multinode-151935" exists ...
	I0419 20:33:54.909360  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:54.909410  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:54.924379  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0419 20:33:54.924920  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:54.925545  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:54.925587  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:54.925915  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:54.926082  406293 main.go:141] libmachine: (multinode-151935) Calling .GetIP
	I0419 20:33:54.928670  406293 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:33:54.929087  406293 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:33:54.929121  406293 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:33:54.929223  406293 host.go:66] Checking if "multinode-151935" exists ...
	I0419 20:33:54.929515  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:54.929551  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:54.945548  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0419 20:33:54.945958  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:54.946410  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:54.946434  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:54.946743  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:54.946927  406293 main.go:141] libmachine: (multinode-151935) Calling .DriverName
	I0419 20:33:54.947125  406293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:33:54.947164  406293 main.go:141] libmachine: (multinode-151935) Calling .GetSSHHostname
	I0419 20:33:54.949637  406293 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:33:54.950109  406293 main.go:141] libmachine: (multinode-151935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:48:a5", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:31:29 +0000 UTC Type:0 Mac:52:54:00:90:48:a5 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-151935 Clientid:01:52:54:00:90:48:a5}
	I0419 20:33:54.950135  406293 main.go:141] libmachine: (multinode-151935) DBG | domain multinode-151935 has defined IP address 192.168.39.193 and MAC address 52:54:00:90:48:a5 in network mk-multinode-151935
	I0419 20:33:54.950188  406293 main.go:141] libmachine: (multinode-151935) Calling .GetSSHPort
	I0419 20:33:54.950349  406293 main.go:141] libmachine: (multinode-151935) Calling .GetSSHKeyPath
	I0419 20:33:54.950467  406293 main.go:141] libmachine: (multinode-151935) Calling .GetSSHUsername
	I0419 20:33:54.950634  406293 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935/id_rsa Username:docker}
	I0419 20:33:55.033766  406293 ssh_runner.go:195] Run: systemctl --version
	I0419 20:33:55.040737  406293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:33:55.057313  406293 kubeconfig.go:125] found "multinode-151935" server: "https://192.168.39.193:8443"
	I0419 20:33:55.057346  406293 api_server.go:166] Checking apiserver status ...
	I0419 20:33:55.057399  406293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 20:33:55.071440  406293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0419 20:33:55.081637  406293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0419 20:33:55.081697  406293 ssh_runner.go:195] Run: ls
	I0419 20:33:55.087194  406293 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8443/healthz ...
	I0419 20:33:55.092101  406293 api_server.go:279] https://192.168.39.193:8443/healthz returned 200:
	ok
	I0419 20:33:55.092130  406293 status.go:422] multinode-151935 apiserver status = Running (err=<nil>)
	I0419 20:33:55.092141  406293 status.go:257] multinode-151935 status: &{Name:multinode-151935 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:33:55.092163  406293 status.go:255] checking status of multinode-151935-m02 ...
	I0419 20:33:55.092515  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:55.092574  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:55.108436  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0419 20:33:55.108877  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:55.109333  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:55.109359  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:55.109728  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:55.109950  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetState
	I0419 20:33:55.111589  406293 status.go:330] multinode-151935-m02 host status = "Running" (err=<nil>)
	I0419 20:33:55.111612  406293 host.go:66] Checking if "multinode-151935-m02" exists ...
	I0419 20:33:55.111967  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:55.112022  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:55.127099  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0419 20:33:55.127614  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:55.128077  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:55.128099  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:55.128422  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:55.128605  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetIP
	I0419 20:33:55.131216  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | domain multinode-151935-m02 has defined MAC address 52:54:00:91:17:ce in network mk-multinode-151935
	I0419 20:33:55.131673  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:17:ce", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:32:31 +0000 UTC Type:0 Mac:52:54:00:91:17:ce Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-151935-m02 Clientid:01:52:54:00:91:17:ce}
	I0419 20:33:55.131707  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | domain multinode-151935-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:91:17:ce in network mk-multinode-151935
	I0419 20:33:55.131878  406293 host.go:66] Checking if "multinode-151935-m02" exists ...
	I0419 20:33:55.132222  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:55.132269  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:55.147524  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0419 20:33:55.147969  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:55.148384  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:55.148405  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:55.148717  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:55.148897  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .DriverName
	I0419 20:33:55.149123  406293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0419 20:33:55.149145  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetSSHHostname
	I0419 20:33:55.151796  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | domain multinode-151935-m02 has defined MAC address 52:54:00:91:17:ce in network mk-multinode-151935
	I0419 20:33:55.152174  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:17:ce", ip: ""} in network mk-multinode-151935: {Iface:virbr1 ExpiryTime:2024-04-19 21:32:31 +0000 UTC Type:0 Mac:52:54:00:91:17:ce Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-151935-m02 Clientid:01:52:54:00:91:17:ce}
	I0419 20:33:55.152203  406293 main.go:141] libmachine: (multinode-151935-m02) DBG | domain multinode-151935-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:91:17:ce in network mk-multinode-151935
	I0419 20:33:55.152350  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetSSHPort
	I0419 20:33:55.152526  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetSSHKeyPath
	I0419 20:33:55.152682  406293 main.go:141] libmachine: (multinode-151935-m02) Calling .GetSSHUsername
	I0419 20:33:55.152808  406293 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18669-366597/.minikube/machines/multinode-151935-m02/id_rsa Username:docker}
	I0419 20:33:55.232024  406293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 20:33:55.246937  406293 status.go:257] multinode-151935-m02 status: &{Name:multinode-151935-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0419 20:33:55.246981  406293 status.go:255] checking status of multinode-151935-m03 ...
	I0419 20:33:55.247375  406293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0419 20:33:55.247434  406293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0419 20:33:55.263314  406293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0419 20:33:55.263757  406293 main.go:141] libmachine: () Calling .GetVersion
	I0419 20:33:55.264261  406293 main.go:141] libmachine: Using API Version  1
	I0419 20:33:55.264284  406293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0419 20:33:55.264626  406293 main.go:141] libmachine: () Calling .GetMachineName
	I0419 20:33:55.264837  406293 main.go:141] libmachine: (multinode-151935-m03) Calling .GetState
	I0419 20:33:55.266305  406293 status.go:330] multinode-151935-m03 host status = "Stopped" (err=<nil>)
	I0419 20:33:55.266331  406293 status.go:343] host is not running, skipping remaining checks
	I0419 20:33:55.266337  406293 status.go:257] multinode-151935-m03 status: &{Name:multinode-151935-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-151935 node start m03 -v=7 --alsologtostderr: (27.711594208s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.37s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.45s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-151935 node delete m03: (1.89512091s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.63s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151935 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0419 20:42:10.227213  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151935 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.07096549s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151935 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.63s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151935
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151935-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-151935-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.597055ms)

                                                
                                                
-- stdout --
	* [multinode-151935-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-151935-m02' is duplicated with machine name 'multinode-151935-m02' in profile 'multinode-151935'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151935-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151935-m03 --driver=kvm2  --container-runtime=crio: (44.025063776s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-151935
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-151935: exit status 80 (233.685489ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-151935 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-151935-m03 already exists in multinode-151935-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-151935-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.17s)

                                                
                                    
TestScheduledStopUnix (115.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-941464 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-941464 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.988144933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-941464 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-941464 -n scheduled-stop-941464
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-941464 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-941464 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-941464 -n scheduled-stop-941464
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-941464
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-941464 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-941464
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-941464: exit status 7 (86.286501ms)

                                                
                                                
-- stdout --
	scheduled-stop-941464
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-941464 -n scheduled-stop-941464
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-941464 -n scheduled-stop-941464: exit status 7 (77.352666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-941464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-941464
--- PASS: TestScheduledStopUnix (115.76s)
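The scheduled-stop test exercises the --schedule and --cancel-scheduled flags end to end: a stop is scheduled, cancelled, rescheduled, and finally allowed to fire, with status checks confirming the remaining time and the eventual Stopped state. The same flow by hand, as a sketch using only the flags that appear in the log above:

	out/minikube-linux-amd64 stop -p scheduled-stop-941464 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-941464
	out/minikube-linux-amd64 stop -p scheduled-stop-941464 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-941464 --schedule 15s
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-941464

Once the 15s schedule has elapsed, the status commands return exit status 7 with the host reported as Stopped, which the test treats as the expected outcome.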

                                                
                                    
TestRunningBinaryUpgrade (116.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.647810260 start -p running-upgrade-683947 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.647810260 start -p running-upgrade-683947 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.979012224s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-683947 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0419 20:56:53.277452  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-683947 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.108407587s)
helpers_test.go:175: Cleaning up "running-upgrade-683947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-683947
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-683947: (1.211178457s)
--- PASS: TestRunningBinaryUpgrade (116.62s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (113.168594ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-097851] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18669
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18669-366597/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18669-366597/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
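
The exit status 14 (MK_USAGE) above is the expected result: --kubernetes-version and --no-kubernetes are mutually exclusive, and the stderr block spells out the fix. A rough sketch of the recovery sequence that message suggests, driving the CLI from Go the way the test harness does; the profile name and driver flags are copied from the log, and this is not part of the test source:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes one minikube subcommand and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Step 1: drop any globally configured kubernetes-version, as the error
	// message above recommends ("minikube config unset kubernetes-version").
	if out, err := run("config", "unset", "kubernetes-version"); err != nil {
		fmt.Println(out, err)
	}
	// Step 2: start without Kubernetes; with no pinned version this no longer
	// conflicts and should not exit with the MK_USAGE code 14 seen above.
	out, err := run("start", "-p", "NoKubernetes-097851",
		"--no-kubernetes", "--driver=kvm2", "--container-runtime=crio")
	fmt.Println(out, err)
}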

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (126.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097851 --driver=kvm2  --container-runtime=crio
E0419 20:52:10.227959  373998 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18669-366597/.minikube/profiles/functional-410415/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097851 --driver=kvm2  --container-runtime=crio: (2m6.005786945s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-097851 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (126.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (20.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.667869678s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-097851 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-097851 status -o json: exit status 2 (244.74876ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-097851","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-097851
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-097851: (1.035380354s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.95s)
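
The stdout block above shows the shape of "status -o json": a single object with Name, Host, Kubelet, APIServer, Kubeconfig and Worker fields (the exit status 2 simply mirrors the stopped components, as with the earlier status calls). A small sketch of decoding that payload; the struct fields are inferred from the log output, not copied from minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the JSON printed above; field names are inferred from
// the log, not taken from minikube's source.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// Payload taken verbatim from the test output above.
	raw := `{"Name":"NoKubernetes-097851","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the VM stays up while kubelet and the apiserver stay down.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}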

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097851 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.632508959s)
--- PASS: TestNoKubernetes/serial/Start (27.63s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (142.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3759258841 start -p stopped-upgrade-192844 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3759258841 start -p stopped-upgrade-192844 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m11.085711534s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3759258841 -p stopped-upgrade-192844 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3759258841 -p stopped-upgrade-192844 stop: (2.134507383s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-192844 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-192844 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.006225601s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.23s)
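
The two binary-upgrade tests above exercise the same basic flow: provision a cluster with an old release (a v1.26.0 binary the harness drops under /tmp), stop it in the stopped-binary variant, then start the same profile again with the freshly built binary so it upgrades in place. A compact sketch of that sequence; the temp-file path is the randomized name from this particular run and is kept only for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// step runs one command in the upgrade sequence and reports its output.
func step(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("%s %v:\n%s\n", bin, args, out)
	return err
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.3759258841" // legacy release binary from this run
	newBin := "out/minikube-linux-amd64"         // freshly built binary under test
	profile := "stopped-upgrade-192844"

	// 1. Create the cluster with the old release.
	// 2. Stop it, so the new binary has to bring up a cold profile.
	// 3. Start the same profile with the new binary and let it upgrade in place.
	steps := [][]string{
		{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
		{oldBin, "-p", profile, "stop"},
		{newBin, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := step(s[0], s[1:]...); err != nil {
			fmt.Println("upgrade step failed:", err)
			return
		}
	}
}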

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-097851 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-097851 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.352146ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
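
The exit status 1 above is the success path for this check: "systemctl is-active --quiet" exits 0 only when the unit is active, and the "Process exited with status 3" in stderr is systemctl's usual code for an inactive unit, so a non-zero exit is exactly what the test wants from a --no-kubernetes profile. A hedged sketch of the same probe, reusing the command line from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Probe kubelet inside the VM over "minikube ssh", as the test does.
	// Exit 0 => kubelet active (unexpected here); non-zero => not active.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-097851",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active; Kubernetes is unexpectedly running")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not active (exit", exitErr.ExitCode(), "), expected with --no-kubernetes")
	default:
		fmt.Println("could not reach the VM:", err)
	}
}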

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-097851
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-097851: (1.352927548s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-097851 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-097851 --driver=kvm2  --container-runtime=crio: (42.591274377s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-097851 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-097851 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.025094ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestPause/serial/Start (102.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-635451 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-635451 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.72762886s)
--- PASS: TestPause/serial/Start (102.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-192844
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                    

Test skip (32/207)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    