Test Report: KVM_Linux_crio 18665

dfbe577bff734bd70c7906dfbd0bc89e038b5d72:2024-04-17:34073

Test failures (11/207)

TestAddons/Setup (2400.04s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-221213 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-221213 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.93330849s)

-- stdout --
	* [addons-221213] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-221213" primary control-plane node in "addons-221213" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-221213 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	* Verifying registry addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-221213 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, default-storageclass, cloud-spanner, metrics-server, nvidia-device-plugin, storage-provisioner, helm-tiller, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0417 17:59:14.089515   83763 out.go:291] Setting OutFile to fd 1 ...
	I0417 17:59:14.089781   83763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:59:14.089792   83763 out.go:304] Setting ErrFile to fd 2...
	I0417 17:59:14.089798   83763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:59:14.089990   83763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 17:59:14.090657   83763 out.go:298] Setting JSON to false
	I0417 17:59:14.091529   83763 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6102,"bootTime":1713370652,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 17:59:14.091597   83763 start.go:139] virtualization: kvm guest
	I0417 17:59:14.094178   83763 out.go:177] * [addons-221213] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 17:59:14.095864   83763 notify.go:220] Checking for updates...
	I0417 17:59:14.095896   83763 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 17:59:14.097372   83763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 17:59:14.098730   83763 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 17:59:14.100028   83763 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 17:59:14.101338   83763 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 17:59:14.102722   83763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 17:59:14.104451   83763 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 17:59:14.136204   83763 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 17:59:14.137528   83763 start.go:297] selected driver: kvm2
	I0417 17:59:14.137542   83763 start.go:901] validating driver "kvm2" against <nil>
	I0417 17:59:14.137553   83763 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 17:59:14.138260   83763 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:59:14.138328   83763 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 17:59:14.152845   83763 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 17:59:14.152896   83763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 17:59:14.153105   83763 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 17:59:14.153175   83763 cni.go:84] Creating CNI manager for ""
	I0417 17:59:14.153187   83763 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 17:59:14.153194   83763 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 17:59:14.153253   83763 start.go:340] cluster config:
	{Name:addons-221213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-221213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:59:14.153363   83763 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:59:14.155919   83763 out.go:177] * Starting "addons-221213" primary control-plane node in "addons-221213" cluster
	I0417 17:59:14.157184   83763 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 17:59:14.157225   83763 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 17:59:14.157235   83763 cache.go:56] Caching tarball of preloaded images
	I0417 17:59:14.157323   83763 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 17:59:14.157337   83763 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 17:59:14.157618   83763 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/config.json ...
	I0417 17:59:14.157639   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/config.json: {Name:mk2c661e0c0a474805582b4dea8dc7799593c15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:14.157791   83763 start.go:360] acquireMachinesLock for addons-221213: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 17:59:14.157855   83763 start.go:364] duration metric: took 43.171µs to acquireMachinesLock for "addons-221213"
	I0417 17:59:14.157880   83763 start.go:93] Provisioning new machine with config: &{Name:addons-221213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.2 ClusterName:addons-221213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 17:59:14.157939   83763 start.go:125] createHost starting for "" (driver="kvm2")
	I0417 17:59:14.159575   83763 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0417 17:59:14.159709   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 17:59:14.159759   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 17:59:14.174278   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0417 17:59:14.174744   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 17:59:14.175375   83763 main.go:141] libmachine: Using API Version  1
	I0417 17:59:14.175394   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 17:59:14.175811   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 17:59:14.176024   83763 main.go:141] libmachine: (addons-221213) Calling .GetMachineName
	I0417 17:59:14.176180   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:14.176348   83763 start.go:159] libmachine.API.Create for "addons-221213" (driver="kvm2")
	I0417 17:59:14.176387   83763 client.go:168] LocalClient.Create starting
	I0417 17:59:14.176438   83763 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 17:59:14.392820   83763 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 17:59:14.543267   83763 main.go:141] libmachine: Running pre-create checks...
	I0417 17:59:14.543294   83763 main.go:141] libmachine: (addons-221213) Calling .PreCreateCheck
	I0417 17:59:14.543887   83763 main.go:141] libmachine: (addons-221213) Calling .GetConfigRaw
	I0417 17:59:14.544321   83763 main.go:141] libmachine: Creating machine...
	I0417 17:59:14.544338   83763 main.go:141] libmachine: (addons-221213) Calling .Create
	I0417 17:59:14.544472   83763 main.go:141] libmachine: (addons-221213) Creating KVM machine...
	I0417 17:59:14.545970   83763 main.go:141] libmachine: (addons-221213) DBG | found existing default KVM network
	I0417 17:59:14.546750   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:14.546569   83785 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0417 17:59:14.546777   83763 main.go:141] libmachine: (addons-221213) DBG | created network xml: 
	I0417 17:59:14.546801   83763 main.go:141] libmachine: (addons-221213) DBG | <network>
	I0417 17:59:14.546815   83763 main.go:141] libmachine: (addons-221213) DBG |   <name>mk-addons-221213</name>
	I0417 17:59:14.546825   83763 main.go:141] libmachine: (addons-221213) DBG |   <dns enable='no'/>
	I0417 17:59:14.546835   83763 main.go:141] libmachine: (addons-221213) DBG |   
	I0417 17:59:14.546847   83763 main.go:141] libmachine: (addons-221213) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0417 17:59:14.546861   83763 main.go:141] libmachine: (addons-221213) DBG |     <dhcp>
	I0417 17:59:14.546894   83763 main.go:141] libmachine: (addons-221213) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0417 17:59:14.546919   83763 main.go:141] libmachine: (addons-221213) DBG |     </dhcp>
	I0417 17:59:14.546930   83763 main.go:141] libmachine: (addons-221213) DBG |   </ip>
	I0417 17:59:14.546946   83763 main.go:141] libmachine: (addons-221213) DBG |   
	I0417 17:59:14.546959   83763 main.go:141] libmachine: (addons-221213) DBG | </network>
	I0417 17:59:14.546970   83763 main.go:141] libmachine: (addons-221213) DBG | 
	I0417 17:59:14.552053   83763 main.go:141] libmachine: (addons-221213) DBG | trying to create private KVM network mk-addons-221213 192.168.39.0/24...
	I0417 17:59:14.618149   83763 main.go:141] libmachine: (addons-221213) DBG | private KVM network mk-addons-221213 192.168.39.0/24 created
	I0417 17:59:14.618182   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:14.618086   83785 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 17:59:14.618214   83763 main.go:141] libmachine: (addons-221213) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213 ...
	I0417 17:59:14.618235   83763 main.go:141] libmachine: (addons-221213) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 17:59:14.618250   83763 main.go:141] libmachine: (addons-221213) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 17:59:14.853831   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:14.853714   83785 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa...
	I0417 17:59:15.194855   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:15.194692   83785 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/addons-221213.rawdisk...
	I0417 17:59:15.194895   83763 main.go:141] libmachine: (addons-221213) DBG | Writing magic tar header
	I0417 17:59:15.194907   83763 main.go:141] libmachine: (addons-221213) DBG | Writing SSH key tar header
	I0417 17:59:15.194915   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:15.194808   83785 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213 ...
	I0417 17:59:15.194926   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213
	I0417 17:59:15.194933   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 17:59:15.194941   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213 (perms=drwx------)
	I0417 17:59:15.194948   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 17:59:15.194959   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 17:59:15.194968   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 17:59:15.194975   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 17:59:15.194985   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 17:59:15.194991   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 17:59:15.194999   83763 main.go:141] libmachine: (addons-221213) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 17:59:15.195004   83763 main.go:141] libmachine: (addons-221213) Creating domain...
	I0417 17:59:15.195014   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 17:59:15.195025   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home/jenkins
	I0417 17:59:15.195031   83763 main.go:141] libmachine: (addons-221213) DBG | Checking permissions on dir: /home
	I0417 17:59:15.195040   83763 main.go:141] libmachine: (addons-221213) DBG | Skipping /home - not owner
	I0417 17:59:15.196213   83763 main.go:141] libmachine: (addons-221213) define libvirt domain using xml: 
	I0417 17:59:15.196243   83763 main.go:141] libmachine: (addons-221213) <domain type='kvm'>
	I0417 17:59:15.196255   83763 main.go:141] libmachine: (addons-221213)   <name>addons-221213</name>
	I0417 17:59:15.196271   83763 main.go:141] libmachine: (addons-221213)   <memory unit='MiB'>4000</memory>
	I0417 17:59:15.196282   83763 main.go:141] libmachine: (addons-221213)   <vcpu>2</vcpu>
	I0417 17:59:15.196290   83763 main.go:141] libmachine: (addons-221213)   <features>
	I0417 17:59:15.196300   83763 main.go:141] libmachine: (addons-221213)     <acpi/>
	I0417 17:59:15.196325   83763 main.go:141] libmachine: (addons-221213)     <apic/>
	I0417 17:59:15.196336   83763 main.go:141] libmachine: (addons-221213)     <pae/>
	I0417 17:59:15.196347   83763 main.go:141] libmachine: (addons-221213)     
	I0417 17:59:15.196381   83763 main.go:141] libmachine: (addons-221213)   </features>
	I0417 17:59:15.196410   83763 main.go:141] libmachine: (addons-221213)   <cpu mode='host-passthrough'>
	I0417 17:59:15.196423   83763 main.go:141] libmachine: (addons-221213)   
	I0417 17:59:15.196436   83763 main.go:141] libmachine: (addons-221213)   </cpu>
	I0417 17:59:15.196450   83763 main.go:141] libmachine: (addons-221213)   <os>
	I0417 17:59:15.196461   83763 main.go:141] libmachine: (addons-221213)     <type>hvm</type>
	I0417 17:59:15.196473   83763 main.go:141] libmachine: (addons-221213)     <boot dev='cdrom'/>
	I0417 17:59:15.196488   83763 main.go:141] libmachine: (addons-221213)     <boot dev='hd'/>
	I0417 17:59:15.196501   83763 main.go:141] libmachine: (addons-221213)     <bootmenu enable='no'/>
	I0417 17:59:15.196515   83763 main.go:141] libmachine: (addons-221213)   </os>
	I0417 17:59:15.196526   83763 main.go:141] libmachine: (addons-221213)   <devices>
	I0417 17:59:15.196534   83763 main.go:141] libmachine: (addons-221213)     <disk type='file' device='cdrom'>
	I0417 17:59:15.196551   83763 main.go:141] libmachine: (addons-221213)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/boot2docker.iso'/>
	I0417 17:59:15.196570   83763 main.go:141] libmachine: (addons-221213)       <target dev='hdc' bus='scsi'/>
	I0417 17:59:15.196584   83763 main.go:141] libmachine: (addons-221213)       <readonly/>
	I0417 17:59:15.196594   83763 main.go:141] libmachine: (addons-221213)     </disk>
	I0417 17:59:15.196609   83763 main.go:141] libmachine: (addons-221213)     <disk type='file' device='disk'>
	I0417 17:59:15.196622   83763 main.go:141] libmachine: (addons-221213)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 17:59:15.196639   83763 main.go:141] libmachine: (addons-221213)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/addons-221213.rawdisk'/>
	I0417 17:59:15.196661   83763 main.go:141] libmachine: (addons-221213)       <target dev='hda' bus='virtio'/>
	I0417 17:59:15.196684   83763 main.go:141] libmachine: (addons-221213)     </disk>
	I0417 17:59:15.196699   83763 main.go:141] libmachine: (addons-221213)     <interface type='network'>
	I0417 17:59:15.196721   83763 main.go:141] libmachine: (addons-221213)       <source network='mk-addons-221213'/>
	I0417 17:59:15.196735   83763 main.go:141] libmachine: (addons-221213)       <model type='virtio'/>
	I0417 17:59:15.196750   83763 main.go:141] libmachine: (addons-221213)     </interface>
	I0417 17:59:15.196788   83763 main.go:141] libmachine: (addons-221213)     <interface type='network'>
	I0417 17:59:15.196811   83763 main.go:141] libmachine: (addons-221213)       <source network='default'/>
	I0417 17:59:15.196844   83763 main.go:141] libmachine: (addons-221213)       <model type='virtio'/>
	I0417 17:59:15.196867   83763 main.go:141] libmachine: (addons-221213)     </interface>
	I0417 17:59:15.196880   83763 main.go:141] libmachine: (addons-221213)     <serial type='pty'>
	I0417 17:59:15.196889   83763 main.go:141] libmachine: (addons-221213)       <target port='0'/>
	I0417 17:59:15.196898   83763 main.go:141] libmachine: (addons-221213)     </serial>
	I0417 17:59:15.196909   83763 main.go:141] libmachine: (addons-221213)     <console type='pty'>
	I0417 17:59:15.196921   83763 main.go:141] libmachine: (addons-221213)       <target type='serial' port='0'/>
	I0417 17:59:15.196931   83763 main.go:141] libmachine: (addons-221213)     </console>
	I0417 17:59:15.196940   83763 main.go:141] libmachine: (addons-221213)     <rng model='virtio'>
	I0417 17:59:15.196957   83763 main.go:141] libmachine: (addons-221213)       <backend model='random'>/dev/random</backend>
	I0417 17:59:15.196969   83763 main.go:141] libmachine: (addons-221213)     </rng>
	I0417 17:59:15.196978   83763 main.go:141] libmachine: (addons-221213)     
	I0417 17:59:15.196984   83763 main.go:141] libmachine: (addons-221213)     
	I0417 17:59:15.196993   83763 main.go:141] libmachine: (addons-221213)   </devices>
	I0417 17:59:15.197006   83763 main.go:141] libmachine: (addons-221213) </domain>
	I0417 17:59:15.197012   83763 main.go:141] libmachine: (addons-221213) 
	I0417 17:59:15.201268   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:12:9c:2b in network default
	I0417 17:59:15.202047   83763 main.go:141] libmachine: (addons-221213) Ensuring networks are active...
	I0417 17:59:15.202069   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:15.202754   83763 main.go:141] libmachine: (addons-221213) Ensuring network default is active
	I0417 17:59:15.203048   83763 main.go:141] libmachine: (addons-221213) Ensuring network mk-addons-221213 is active
	I0417 17:59:15.203491   83763 main.go:141] libmachine: (addons-221213) Getting domain xml...
	I0417 17:59:15.204164   83763 main.go:141] libmachine: (addons-221213) Creating domain...
	I0417 17:59:16.392813   83763 main.go:141] libmachine: (addons-221213) Waiting to get IP...
	I0417 17:59:16.394197   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:16.394783   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:16.394815   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:16.394668   83785 retry.go:31] will retry after 285.013009ms: waiting for machine to come up
	I0417 17:59:16.681331   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:16.681869   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:16.681902   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:16.681817   83785 retry.go:31] will retry after 290.975776ms: waiting for machine to come up
	I0417 17:59:16.974285   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:16.974801   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:16.974838   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:16.974749   83785 retry.go:31] will retry after 410.138679ms: waiting for machine to come up
	I0417 17:59:17.386355   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:17.386761   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:17.386796   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:17.386709   83785 retry.go:31] will retry after 450.714694ms: waiting for machine to come up
	I0417 17:59:17.839527   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:17.839963   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:17.839994   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:17.839899   83785 retry.go:31] will retry after 580.497754ms: waiting for machine to come up
	I0417 17:59:18.421657   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:18.422106   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:18.422136   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:18.422051   83785 retry.go:31] will retry after 863.549256ms: waiting for machine to come up
	I0417 17:59:19.287673   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:19.288061   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:19.288093   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:19.288006   83785 retry.go:31] will retry after 1.078557693s: waiting for machine to come up
	I0417 17:59:20.368281   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:20.368809   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:20.368837   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:20.368746   83785 retry.go:31] will retry after 1.142808151s: waiting for machine to come up
	I0417 17:59:21.512990   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:21.513402   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:21.513434   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:21.513352   83785 retry.go:31] will retry after 1.143736039s: waiting for machine to come up
	I0417 17:59:22.658657   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:22.658988   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:22.659016   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:22.658933   83785 retry.go:31] will retry after 1.763944794s: waiting for machine to come up
	I0417 17:59:24.424509   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:24.424929   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:24.424959   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:24.424897   83785 retry.go:31] will retry after 2.007443383s: waiting for machine to come up
	I0417 17:59:26.433970   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:26.434429   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:26.434461   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:26.434364   83785 retry.go:31] will retry after 2.64291792s: waiting for machine to come up
	I0417 17:59:29.080132   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:29.080545   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:29.080569   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:29.080505   83785 retry.go:31] will retry after 3.644286795s: waiting for machine to come up
	I0417 17:59:32.728795   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:32.729218   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find current IP address of domain addons-221213 in network mk-addons-221213
	I0417 17:59:32.729242   83763 main.go:141] libmachine: (addons-221213) DBG | I0417 17:59:32.729154   83785 retry.go:31] will retry after 3.815754465s: waiting for machine to come up
	I0417 17:59:36.547794   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.548279   83763 main.go:141] libmachine: (addons-221213) Found IP for machine: 192.168.39.199
	I0417 17:59:36.548305   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has current primary IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.548311   83763 main.go:141] libmachine: (addons-221213) Reserving static IP address...
	I0417 17:59:36.548709   83763 main.go:141] libmachine: (addons-221213) DBG | unable to find host DHCP lease matching {name: "addons-221213", mac: "52:54:00:ca:2d:a2", ip: "192.168.39.199"} in network mk-addons-221213
	I0417 17:59:36.623683   83763 main.go:141] libmachine: (addons-221213) Reserved static IP address: 192.168.39.199
	I0417 17:59:36.623715   83763 main.go:141] libmachine: (addons-221213) DBG | Getting to WaitForSSH function...
	I0417 17:59:36.623724   83763 main.go:141] libmachine: (addons-221213) Waiting for SSH to be available...
	I0417 17:59:36.626601   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.627047   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:36.627071   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.627228   83763 main.go:141] libmachine: (addons-221213) DBG | Using SSH client type: external
	I0417 17:59:36.627251   83763 main.go:141] libmachine: (addons-221213) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa (-rw-------)
	I0417 17:59:36.627295   83763 main.go:141] libmachine: (addons-221213) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 17:59:36.627313   83763 main.go:141] libmachine: (addons-221213) DBG | About to run SSH command:
	I0417 17:59:36.627325   83763 main.go:141] libmachine: (addons-221213) DBG | exit 0
	I0417 17:59:36.752937   83763 main.go:141] libmachine: (addons-221213) DBG | SSH cmd err, output: <nil>: 
	I0417 17:59:36.753268   83763 main.go:141] libmachine: (addons-221213) KVM machine creation complete!
	I0417 17:59:36.753585   83763 main.go:141] libmachine: (addons-221213) Calling .GetConfigRaw
	I0417 17:59:36.754210   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:36.754480   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:36.754679   83763 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 17:59:36.754739   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 17:59:36.755989   83763 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 17:59:36.756004   83763 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 17:59:36.756010   83763 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 17:59:36.756019   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:36.758339   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.758649   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:36.758678   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.758773   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:36.758938   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.759165   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.759327   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:36.759513   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:36.759809   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:36.759822   83763 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 17:59:36.864138   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 17:59:36.864178   83763 main.go:141] libmachine: Detecting the provisioner...
	I0417 17:59:36.864191   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:36.867042   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.867442   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:36.867470   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.867633   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:36.867854   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.868021   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.868141   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:36.868301   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:36.868504   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:36.868523   83763 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 17:59:36.973489   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 17:59:36.973610   83763 main.go:141] libmachine: found compatible host: buildroot
	I0417 17:59:36.973622   83763 main.go:141] libmachine: Provisioning with buildroot...
	I0417 17:59:36.973630   83763 main.go:141] libmachine: (addons-221213) Calling .GetMachineName
	I0417 17:59:36.973892   83763 buildroot.go:166] provisioning hostname "addons-221213"
	I0417 17:59:36.973916   83763 main.go:141] libmachine: (addons-221213) Calling .GetMachineName
	I0417 17:59:36.974106   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:36.976748   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.977187   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:36.977217   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:36.977364   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:36.977542   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.977701   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:36.977841   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:36.978052   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:36.978222   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:36.978234   83763 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-221213 && echo "addons-221213" | sudo tee /etc/hostname
	I0417 17:59:37.096273   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-221213
	
	I0417 17:59:37.096307   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.098945   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.099330   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.099355   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.099495   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:37.099629   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.099835   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.099970   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:37.100116   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:37.100364   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:37.100388   83763 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-221213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-221213/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-221213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 17:59:37.214907   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 17:59:37.214942   83763 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 17:59:37.214997   83763 buildroot.go:174] setting up certificates
	I0417 17:59:37.215012   83763 provision.go:84] configureAuth start
	I0417 17:59:37.215033   83763 main.go:141] libmachine: (addons-221213) Calling .GetMachineName
	I0417 17:59:37.215387   83763 main.go:141] libmachine: (addons-221213) Calling .GetIP
	I0417 17:59:37.218162   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.218513   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.218544   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.218668   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.220720   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.221033   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.221075   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.221178   83763 provision.go:143] copyHostCerts
	I0417 17:59:37.221249   83763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 17:59:37.221417   83763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 17:59:37.221525   83763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 17:59:37.221599   83763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.addons-221213 san=[127.0.0.1 192.168.39.199 addons-221213 localhost minikube]
	I0417 17:59:37.352602   83763 provision.go:177] copyRemoteCerts
	I0417 17:59:37.352674   83763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 17:59:37.352715   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.355675   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.356071   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.356104   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.356244   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:37.356425   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.356635   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:37.356739   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 17:59:37.439152   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 17:59:37.464784   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 17:59:37.491099   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 17:59:37.516488   83763 provision.go:87] duration metric: took 301.457845ms to configureAuth
	I0417 17:59:37.516517   83763 buildroot.go:189] setting minikube options for container-runtime
	I0417 17:59:37.516748   83763 config.go:182] Loaded profile config "addons-221213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 17:59:37.516860   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.519452   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.519822   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.519872   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.520053   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:37.520273   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.520425   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.520569   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:37.520732   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:37.520983   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:37.521002   83763 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 17:59:37.789676   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 17:59:37.789710   83763 main.go:141] libmachine: Checking connection to Docker...
	I0417 17:59:37.789719   83763 main.go:141] libmachine: (addons-221213) Calling .GetURL
	I0417 17:59:37.791357   83763 main.go:141] libmachine: (addons-221213) DBG | Using libvirt version 6000000
	I0417 17:59:37.794108   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.794497   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.794524   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.794711   83763 main.go:141] libmachine: Docker is up and running!
	I0417 17:59:37.794727   83763 main.go:141] libmachine: Reticulating splines...
	I0417 17:59:37.794735   83763 client.go:171] duration metric: took 23.618336261s to LocalClient.Create
	I0417 17:59:37.794769   83763 start.go:167] duration metric: took 23.618424878s to libmachine.API.Create "addons-221213"
	I0417 17:59:37.794779   83763 start.go:293] postStartSetup for "addons-221213" (driver="kvm2")
	I0417 17:59:37.794792   83763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 17:59:37.794808   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:37.795078   83763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 17:59:37.795112   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.797338   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.797635   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.797675   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.797798   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:37.797969   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.798135   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:37.798289   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 17:59:37.880083   83763 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 17:59:37.884431   83763 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 17:59:37.884457   83763 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 17:59:37.884542   83763 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 17:59:37.884575   83763 start.go:296] duration metric: took 89.789251ms for postStartSetup
	I0417 17:59:37.884622   83763 main.go:141] libmachine: (addons-221213) Calling .GetConfigRaw
	I0417 17:59:37.885423   83763 main.go:141] libmachine: (addons-221213) Calling .GetIP
	I0417 17:59:37.888463   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.888833   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.888864   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.889120   83763 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/config.json ...
	I0417 17:59:37.889356   83763 start.go:128] duration metric: took 23.7314044s to createHost
	I0417 17:59:37.889387   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:37.891587   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.891949   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:37.891978   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:37.892195   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:37.892444   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.892606   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:37.892748   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:37.892968   83763 main.go:141] libmachine: Using SSH client type: native
	I0417 17:59:37.893152   83763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0417 17:59:37.893165   83763 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0417 17:59:37.997790   83763 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713376777.978679713
	
	I0417 17:59:37.997812   83763 fix.go:216] guest clock: 1713376777.978679713
	I0417 17:59:37.997820   83763 fix.go:229] Guest: 2024-04-17 17:59:37.978679713 +0000 UTC Remote: 2024-04-17 17:59:37.889371545 +0000 UTC m=+23.846038534 (delta=89.308168ms)
	I0417 17:59:37.997857   83763 fix.go:200] guest clock delta is within tolerance: 89.308168ms
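The fix.go lines above compare the guest clock to the host clock and accept the skew when it falls inside a tolerance. A minimal Go sketch of that delta check, using the two timestamps from this log (the 1s tolerance here is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative values taken from the log lines above.
	guest := time.Date(2024, time.April, 17, 17, 59, 37, 978679713, time.UTC)
	remote := time.Date(2024, time.April, 17, 17, 59, 37, 889371545, time.UTC)
	tolerance := 1 * time.Second // assumed tolerance for this sketch

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 89.308168ms
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; clock sync needed\n", delta, tolerance)
	}
}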
	I0417 17:59:37.997862   83763 start.go:83] releasing machines lock for "addons-221213", held for 23.839993821s
	I0417 17:59:37.997884   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:37.998160   83763 main.go:141] libmachine: (addons-221213) Calling .GetIP
	I0417 17:59:38.001173   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.001578   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:38.001605   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.001811   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:38.002337   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:38.002539   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 17:59:38.002672   83763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 17:59:38.002723   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:38.002825   83763 ssh_runner.go:195] Run: cat /version.json
	I0417 17:59:38.002852   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 17:59:38.005493   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.005607   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.005884   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:38.005936   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.005964   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:38.005984   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:38.006078   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:38.006183   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 17:59:38.006270   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:38.006327   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 17:59:38.006394   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:38.006460   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 17:59:38.006603   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 17:59:38.006623   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 17:59:38.085994   83763 ssh_runner.go:195] Run: systemctl --version
	I0417 17:59:38.105779   83763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 17:59:38.266509   83763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 17:59:38.273673   83763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 17:59:38.273763   83763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 17:59:38.291153   83763 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 17:59:38.291184   83763 start.go:494] detecting cgroup driver to use...
	I0417 17:59:38.291283   83763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 17:59:38.306930   83763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 17:59:38.320890   83763 docker.go:217] disabling cri-docker service (if available) ...
	I0417 17:59:38.320956   83763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 17:59:38.334207   83763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 17:59:38.347611   83763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 17:59:38.459533   83763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 17:59:38.622989   83763 docker.go:233] disabling docker service ...
	I0417 17:59:38.623074   83763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 17:59:38.637388   83763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 17:59:38.650463   83763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 17:59:38.765390   83763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 17:59:38.889375   83763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 17:59:38.903783   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 17:59:38.923932   83763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 17:59:38.923991   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:38.934857   83763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 17:59:38.934962   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:38.945338   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:38.955828   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:38.966361   83763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 17:59:38.976990   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:38.987056   83763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:39.004636   83763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 17:59:39.014838   83763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 17:59:39.024146   83763 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 17:59:39.024224   83763 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 17:59:39.036949   83763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 17:59:39.047055   83763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 17:59:39.160231   83763 ssh_runner.go:195] Run: sudo systemctl restart crio
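The sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Go equivalent of that line rewriting, shown only as a sketch (the file contents below are a simplified assumption; the key names and values come from the log):

package main

import (
	"fmt"
	"regexp"
)

// rewriteKey replaces any existing `key = ...` line with the given value,
// mirroring the `sed -i 's|^.*key = .*$|...|'` calls in the log.
func rewriteKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}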
	I0417 17:59:39.300226   83763 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 17:59:39.300321   83763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 17:59:39.305291   83763 start.go:562] Will wait 60s for crictl version
	I0417 17:59:39.305346   83763 ssh_runner.go:195] Run: which crictl
	I0417 17:59:39.309246   83763 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 17:59:39.350918   83763 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 17:59:39.351002   83763 ssh_runner.go:195] Run: crio --version
	I0417 17:59:39.380019   83763 ssh_runner.go:195] Run: crio --version
	I0417 17:59:39.410444   83763 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 17:59:39.411805   83763 main.go:141] libmachine: (addons-221213) Calling .GetIP
	I0417 17:59:39.414288   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:39.414617   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 17:59:39.414641   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 17:59:39.414960   83763 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 17:59:39.419393   83763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 17:59:39.432999   83763 kubeadm.go:877] updating cluster {Name:addons-221213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-221213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 17:59:39.433114   83763 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 17:59:39.433154   83763 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 17:59:39.468023   83763 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0417 17:59:39.468087   83763 ssh_runner.go:195] Run: which lz4
	I0417 17:59:39.472433   83763 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0417 17:59:39.476752   83763 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0417 17:59:39.476791   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394547972 bytes)
	I0417 17:59:40.932491   83763 crio.go:462] duration metric: took 1.460088607s to copy over tarball
	I0417 17:59:40.932562   83763 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0417 17:59:43.232319   83763 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.299722197s)
	I0417 17:59:43.232350   83763 crio.go:469] duration metric: took 2.29983253s to extract the tarball
	I0417 17:59:43.232358   83763 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 17:59:43.271825   83763 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 17:59:43.315914   83763 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 17:59:43.315943   83763 cache_images.go:84] Images are preloaded, skipping loading
	I0417 17:59:43.315953   83763 kubeadm.go:928] updating node { 192.168.39.199 8443 v1.30.0-rc.2 crio true true} ...
	I0417 17:59:43.316113   83763 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-221213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-221213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 17:59:43.316212   83763 ssh_runner.go:195] Run: crio config
	I0417 17:59:43.363453   83763 cni.go:84] Creating CNI manager for ""
	I0417 17:59:43.363488   83763 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 17:59:43.363505   83763 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 17:59:43.363537   83763 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-221213 NodeName:addons-221213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 17:59:43.363759   83763 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-221213"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 17:59:43.363836   83763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 17:59:43.374819   83763 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 17:59:43.374897   83763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 17:59:43.385212   83763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0417 17:59:43.402441   83763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 17:59:43.419434   83763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
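minikube renders the kubeadm config shown above from Go templates before copying it to the VM as kubeadm.yaml.new. A heavily trimmed illustration of that kind of rendering; the fragment and struct fields below are simplified assumptions for this sketch, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// params holds only the values needed for this trimmed fragment.
type params struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	p := params{
		AdvertiseAddress:  "192.168.39.199",
		BindPort:          8443,
		NodeName:          "addons-221213",
		KubernetesVersion: "v1.30.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Write the rendered config to stdout; minikube instead scp's it to the VM.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}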
	I0417 17:59:43.437029   83763 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0417 17:59:43.441141   83763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 17:59:43.454397   83763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 17:59:43.578200   83763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 17:59:43.597357   83763 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213 for IP: 192.168.39.199
	I0417 17:59:43.597386   83763 certs.go:194] generating shared ca certs ...
	I0417 17:59:43.597402   83763 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.597551   83763 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 17:59:43.680798   83763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt ...
	I0417 17:59:43.680828   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt: {Name:mk7286f710b302f07f682b16e036d3aaf759645c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.681016   83763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key ...
	I0417 17:59:43.681035   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key: {Name:mk72701f0e875987f4ee296a04840f42d4356f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.681139   83763 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 17:59:43.829644   83763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt ...
	I0417 17:59:43.829675   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt: {Name:mk876cfd904c4038dba3b72cc02bab2c34e4e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.829864   83763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key ...
	I0417 17:59:43.829885   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key: {Name:mk07ca1c0410bfb934a442b193aa15a5997093fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.829980   83763 certs.go:256] generating profile certs ...
	I0417 17:59:43.830120   83763 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.key
	I0417 17:59:43.830143   83763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.crt with IP's: []
	I0417 17:59:43.950982   83763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.crt ...
	I0417 17:59:43.951016   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.crt: {Name:mk44b94611a4a70db64b227a47a044a157f01442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.951236   83763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.key ...
	I0417 17:59:43.951253   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/client.key: {Name:mk03189e684490c4da191d2b643781b9ebc2cbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:43.951353   83763 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key.a3179eec
	I0417 17:59:43.951380   83763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt.a3179eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.199]
	I0417 17:59:44.126410   83763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt.a3179eec ...
	I0417 17:59:44.126442   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt.a3179eec: {Name:mk4d32b0e393b70744dc2c71290503b181c5793a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:44.126646   83763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key.a3179eec ...
	I0417 17:59:44.126666   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key.a3179eec: {Name:mkad462fd521b29e1f55be978691ad6f104981d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:44.126772   83763 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt.a3179eec -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt
	I0417 17:59:44.126886   83763 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key.a3179eec -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key
	I0417 17:59:44.126958   83763 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.key
	I0417 17:59:44.126980   83763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.crt with IP's: []
	I0417 17:59:44.354227   83763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.crt ...
	I0417 17:59:44.354260   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.crt: {Name:mk86244172b13446446c0bb0d67aaa94f0ef528f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:44.354439   83763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.key ...
	I0417 17:59:44.354451   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.key: {Name:mk01440fb440910aff14f326c6eeb31419b140e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
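The certs.go/crypto.go lines above generate the shared minikubeCA plus the profile certificates and write them out under WriteFile locks. A condensed, standalone sketch of the CA-generation part using crypto/x509; the key size, subject, and validity are assumptions, and error handling is trimmed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA private key (2048-bit RSA is an assumption for brevity).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Self-signed CA template, valid for 10 years.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// Write ca.crt and ca.key as PEM, the same shape the WriteFile calls above produce.
	crt, _ := os.Create("ca.crt") // error handling elided in this sketch
	defer crt.Close()
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	k, _ := os.Create("ca.key")
	defer k.Close()
	pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}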
	I0417 17:59:44.354612   83763 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 17:59:44.354648   83763 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 17:59:44.354674   83763 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 17:59:44.354697   83763 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 17:59:44.355392   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 17:59:44.393374   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 17:59:44.432200   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 17:59:44.459344   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 17:59:44.484612   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 17:59:44.509737   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 17:59:44.534550   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 17:59:44.560119   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/addons-221213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 17:59:44.585567   83763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 17:59:44.610574   83763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 17:59:44.628503   83763 ssh_runner.go:195] Run: openssl version
	I0417 17:59:44.634455   83763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 17:59:44.648689   83763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 17:59:44.653758   83763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 17:59:44.653820   83763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 17:59:44.660266   83763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 17:59:44.674558   83763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 17:59:44.679170   83763 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 17:59:44.679223   83763 kubeadm.go:391] StartCluster: {Name:addons-221213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:addons-221213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:59:44.679305   83763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 17:59:44.679354   83763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 17:59:44.726990   83763 cri.go:89] found id: ""
	I0417 17:59:44.727062   83763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 17:59:44.740125   83763 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 17:59:44.753027   83763 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 17:59:44.765790   83763 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 17:59:44.765814   83763 kubeadm.go:156] found existing configuration files:
	
	I0417 17:59:44.765858   83763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 17:59:44.777845   83763 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 17:59:44.777903   83763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 17:59:44.791079   83763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 17:59:44.802130   83763 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 17:59:44.802196   83763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 17:59:44.812924   83763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 17:59:44.823194   83763 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 17:59:44.823253   83763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 17:59:44.834024   83763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 17:59:44.844414   83763 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 17:59:44.844481   83763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 17:59:44.855278   83763 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 17:59:44.906842   83763 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 17:59:44.906943   83763 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 17:59:45.039598   83763 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 17:59:45.039778   83763 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 17:59:45.039919   83763 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 17:59:45.289126   83763 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 17:59:45.445743   83763 out.go:204]   - Generating certificates and keys ...
	I0417 17:59:45.445868   83763 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 17:59:45.445972   83763 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 17:59:45.446086   83763 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 17:59:45.625927   83763 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 17:59:45.846295   83763 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 17:59:45.944566   83763 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 17:59:46.003872   83763 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 17:59:46.004066   83763 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-221213 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0417 17:59:46.097759   83763 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 17:59:46.098017   83763 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-221213 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0417 17:59:46.251280   83763 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 17:59:46.321481   83763 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 17:59:46.449543   83763 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 17:59:46.449765   83763 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 17:59:46.561441   83763 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 17:59:46.812365   83763 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 17:59:47.058257   83763 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 17:59:47.107843   83763 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 17:59:47.299985   83763 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 17:59:47.300600   83763 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 17:59:47.303087   83763 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 17:59:47.304979   83763 out.go:204]   - Booting up control plane ...
	I0417 17:59:47.305108   83763 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 17:59:47.305183   83763 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 17:59:47.305239   83763 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 17:59:47.320872   83763 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 17:59:47.321727   83763 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 17:59:47.321806   83763 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 17:59:47.455217   83763 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 17:59:47.455350   83763 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 17:59:47.958798   83763 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 503.956133ms
	I0417 17:59:47.958902   83763 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 17:59:52.958338   83763 kubeadm.go:309] [api-check] The API server is healthy after 5.001463074s
	I0417 17:59:52.974786   83763 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 17:59:52.986986   83763 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 17:59:53.016359   83763 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 17:59:53.016530   83763 kubeadm.go:309] [mark-control-plane] Marking the node addons-221213 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 17:59:53.028393   83763 kubeadm.go:309] [bootstrap-token] Using token: jixlo2.ceo5x00xibmq7g1q
	I0417 17:59:53.029967   83763 out.go:204]   - Configuring RBAC rules ...
	I0417 17:59:53.030113   83763 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 17:59:53.040592   83763 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 17:59:53.052549   83763 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 17:59:53.056275   83763 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 17:59:53.059926   83763 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 17:59:53.063191   83763 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 17:59:53.365156   83763 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 17:59:53.818976   83763 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 17:59:54.365741   83763 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 17:59:54.366709   83763 kubeadm.go:309] 
	I0417 17:59:54.366797   83763 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 17:59:54.366808   83763 kubeadm.go:309] 
	I0417 17:59:54.366909   83763 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 17:59:54.366920   83763 kubeadm.go:309] 
	I0417 17:59:54.366961   83763 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 17:59:54.367056   83763 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 17:59:54.367138   83763 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 17:59:54.367147   83763 kubeadm.go:309] 
	I0417 17:59:54.367241   83763 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 17:59:54.367251   83763 kubeadm.go:309] 
	I0417 17:59:54.367317   83763 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 17:59:54.367326   83763 kubeadm.go:309] 
	I0417 17:59:54.367398   83763 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 17:59:54.367509   83763 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 17:59:54.367609   83763 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 17:59:54.367618   83763 kubeadm.go:309] 
	I0417 17:59:54.367747   83763 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 17:59:54.367819   83763 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 17:59:54.367827   83763 kubeadm.go:309] 
	I0417 17:59:54.367909   83763 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jixlo2.ceo5x00xibmq7g1q \
	I0417 17:59:54.368013   83763 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 \
	I0417 17:59:54.368072   83763 kubeadm.go:309] 	--control-plane 
	I0417 17:59:54.368090   83763 kubeadm.go:309] 
	I0417 17:59:54.368205   83763 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 17:59:54.368214   83763 kubeadm.go:309] 
	I0417 17:59:54.368332   83763 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jixlo2.ceo5x00xibmq7g1q \
	I0417 17:59:54.368501   83763 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 
	I0417 17:59:54.368881   83763 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
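The value kubeadm prints after --discovery-token-ca-cert-hash above is, per kubeadm's documented format, the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small sketch that recomputes it from a CA certificate file (the file path is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate (path assumed for this sketch).
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}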
	I0417 17:59:54.368919   83763 cni.go:84] Creating CNI manager for ""
	I0417 17:59:54.368930   83763 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 17:59:54.371368   83763 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0417 17:59:54.372667   83763 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0417 17:59:54.383856   83763 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0417 17:59:54.405864   83763 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 17:59:54.406001   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:54.406014   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-221213 minikube.k8s.io/updated_at=2024_04_17T17_59_54_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=addons-221213 minikube.k8s.io/primary=true
	I0417 17:59:54.561540   83763 ops.go:34] apiserver oom_adj: -16
	I0417 17:59:54.561722   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:55.061816   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:55.562527   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:56.062188   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:56.562031   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:57.061773   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:57.562765   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:58.062623   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:58.561807   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:59.062727   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 17:59:59.562586   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:00.061891   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:00.562821   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:01.062099   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:01.562064   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:02.062538   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:02.562378   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:03.062782   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:03.562031   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:04.062234   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:04.561989   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:05.061706   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:05.562829   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:06.062588   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:06.562451   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:07.061885   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:07.562690   83763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:00:07.642586   83763 kubeadm.go:1107] duration metric: took 13.236652368s to wait for elevateKubeSystemPrivileges
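	The block of repeated `kubectl get sa default` calls above is minikube polling until the cluster's default service account exists before it proceeds (the elevateKubeSystemPrivileges step reported on this line). A minimal stand-alone sketch of that polling pattern, assuming local execution and reusing the kubectl/kubeconfig paths shown in the log (interval and timeout are assumptions), is:

```go
// Hypothetical illustration (not minikube's code): retry
// "kubectl get sa default" until it succeeds or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl" // path taken from the log
	kubeconfig := "/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			log.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence visible in the timestamps above
	}
	log.Fatal("timed out waiting for the default service account")
}
```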
	W0417 18:00:07.642644   83763 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0417 18:00:07.642656   83763 kubeadm.go:393] duration metric: took 22.96343868s to StartCluster
	I0417 18:00:07.642680   83763 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:00:07.642820   83763 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:00:07.643193   83763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:00:07.643363   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0417 18:00:07.643384   83763 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:00:07.645372   83763 out.go:177] * Verifying Kubernetes components...
	I0417 18:00:07.643481   83763 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0417 18:00:07.643583   83763 config.go:182] Loaded profile config "addons-221213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:00:07.646811   83763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:00:07.645484   83763 addons.go:69] Setting cloud-spanner=true in profile "addons-221213"
	I0417 18:00:07.646883   83763 addons.go:234] Setting addon cloud-spanner=true in "addons-221213"
	I0417 18:00:07.646926   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.645493   83763 addons.go:69] Setting inspektor-gadget=true in profile "addons-221213"
	I0417 18:00:07.646962   83763 addons.go:234] Setting addon inspektor-gadget=true in "addons-221213"
	I0417 18:00:07.646992   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.645500   83763 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-221213"
	I0417 18:00:07.645505   83763 addons.go:69] Setting default-storageclass=true in profile "addons-221213"
	I0417 18:00:07.647126   83763 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-221213"
	I0417 18:00:07.645512   83763 addons.go:69] Setting gcp-auth=true in profile "addons-221213"
	I0417 18:00:07.645508   83763 addons.go:69] Setting registry=true in profile "addons-221213"
	I0417 18:00:07.647292   83763 addons.go:234] Setting addon registry=true in "addons-221213"
	I0417 18:00:07.647322   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.645511   83763 addons.go:69] Setting yakd=true in profile "addons-221213"
	I0417 18:00:07.647365   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.647390   83763 addons.go:234] Setting addon yakd=true in "addons-221213"
	I0417 18:00:07.645516   83763 addons.go:69] Setting helm-tiller=true in profile "addons-221213"
	I0417 18:00:07.647421   83763 addons.go:234] Setting addon helm-tiller=true in "addons-221213"
	I0417 18:00:07.647433   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.647445   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.645518   83763 addons.go:69] Setting metrics-server=true in profile "addons-221213"
	I0417 18:00:07.645520   83763 addons.go:69] Setting storage-provisioner=true in profile "addons-221213"
	I0417 18:00:07.645521   83763 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-221213"
	I0417 18:00:07.645521   83763 addons.go:69] Setting ingress-dns=true in profile "addons-221213"
	I0417 18:00:07.645530   83763 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-221213"
	I0417 18:00:07.645533   83763 addons.go:69] Setting volumesnapshots=true in profile "addons-221213"
	I0417 18:00:07.645545   83763 addons.go:69] Setting ingress=true in profile "addons-221213"
	I0417 18:00:07.647085   83763 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-221213"
	I0417 18:00:07.647393   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.647490   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.647528   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.647534   83763 addons.go:234] Setting addon ingress-dns=true in "addons-221213"
	I0417 18:00:07.647540   83763 addons.go:234] Setting addon storage-provisioner=true in "addons-221213"
	I0417 18:00:07.647490   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.647346   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.647593   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.647598   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.647614   83763 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-221213"
	I0417 18:00:07.647652   83763 addons.go:234] Setting addon volumesnapshots=true in "addons-221213"
	I0417 18:00:07.647655   83763 addons.go:234] Setting addon ingress=true in "addons-221213"
	I0417 18:00:07.647656   83763 addons.go:234] Setting addon metrics-server=true in "addons-221213"
	I0417 18:00:07.647226   83763 mustload.go:65] Loading cluster: addons-221213
	I0417 18:00:07.647844   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.647876   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.647893   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.647994   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648028   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648029   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648067   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648078   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.648137   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.647874   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648237   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648311   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648323   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648392   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648419   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648426   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648434   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648438   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.648472   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.648492   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648496   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648470   83763 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-221213"
	I0417 18:00:07.648602   83763 config.go:182] Loaded profile config "addons-221213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:00:07.648823   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648893   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.648825   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.648963   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.668500   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0417 18:00:07.668766   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42405
	I0417 18:00:07.668808   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0417 18:00:07.668978   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.669291   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.669525   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.669559   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.669765   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.669793   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.670171   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.670224   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.670436   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.670857   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.670905   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.671299   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0417 18:00:07.671516   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.671693   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.672495   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.672519   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.672674   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.672692   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.673149   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.673343   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.673771   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.673822   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.674650   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.674692   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.674798   83763 addons.go:234] Setting addon default-storageclass=true in "addons-221213"
	I0417 18:00:07.674840   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.679393   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0417 18:00:07.681180   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.681228   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.681237   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.681276   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.681328   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.685041   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.685070   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.685420   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.685461   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.686906   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.686997   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0417 18:00:07.687356   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.687861   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.687883   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.688528   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.688573   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.688808   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.689340   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.689380   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.707540   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0417 18:00:07.707691   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0417 18:00:07.708139   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.708245   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.708782   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.708803   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.708913   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.708938   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.709203   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.709412   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.709598   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.710371   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0417 18:00:07.710703   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.710734   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.712351   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I0417 18:00:07.712503   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.713159   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.713406   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.713425   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.713695   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.713768   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.713784   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.713844   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.715746   83763 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0417 18:00:07.714182   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.714683   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.716889   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0417 18:00:07.717266   83763 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0417 18:00:07.717282   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0417 18:00:07.717305   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.717402   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.717991   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.718026   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.718728   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I0417 18:00:07.718907   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.718985   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.719590   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.719607   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.719748   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.719771   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.720315   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.720319   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.720953   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.721286   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.721316   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.722665   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0417 18:00:07.723038   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.723351   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.723807   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.723828   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.724146   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.724711   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.724752   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.724990   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.724996   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.725006   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.725039   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.725199   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.727022   83763 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0417 18:00:07.725526   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.726023   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0417 18:00:07.728645   83763 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 18:00:07.728667   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0417 18:00:07.728686   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.728725   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.729333   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.729884   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.729901   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.730294   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.730935   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.730974   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.731178   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0417 18:00:07.731186   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0417 18:00:07.731621   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.731715   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.732146   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.732168   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.732303   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.732484   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.732634   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.732794   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.733197   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0417 18:00:07.737048   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.737572   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.737911   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.737927   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.738249   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.738278   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.738346   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.738864   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.738891   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.739189   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.739753   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.739779   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.741394   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.741419   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.741964   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.742180   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.744111   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.746440   83763 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0417 18:00:07.748194   83763 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0417 18:00:07.748220   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0417 18:00:07.748245   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.749468   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0417 18:00:07.749481   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0417 18:00:07.750043   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.750721   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.750762   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.751251   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.751319   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44165
	I0417 18:00:07.751618   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.752229   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.752268   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.752289   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.752513   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.752699   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.752803   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.752999   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.753193   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0417 18:00:07.753300   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.753597   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.753719   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.753733   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.756215   83763 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0417 18:00:07.754308   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.754707   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.758080   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0417 18:00:07.758974   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0417 18:00:07.759342   83763 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0417 18:00:07.759362   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0417 18:00:07.759384   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.760279   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.760392   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.760411   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.760450   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.760490   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0417 18:00:07.760535   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.761527   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.761569   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.761611   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.761628   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.761673   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.761763   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.761781   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.762281   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.762296   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.762299   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.762388   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.762398   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.762752   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.762857   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.762908   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0417 18:00:07.762925   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.763166   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.763222   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0417 18:00:07.763432   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.763643   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.763796   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.764926   83763 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-221213"
	I0417 18:00:07.764972   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.765289   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.765314   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.765549   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.765583   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.765548   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.765663   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.766266   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.766315   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.766334   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.766341   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.768395   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0417 18:00:07.766358   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.766384   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:07.766618   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.766645   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.766855   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.767218   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.770628   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.772093   83763 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0417 18:00:07.772160   83763 out.go:177]   - Using image docker.io/registry:2.8.3
	I0417 18:00:07.772840   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.773570   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.774897   83763 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0417 18:00:07.774915   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0417 18:00:07.774938   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.773071   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.773243   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0417 18:00:07.773511   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0417 18:00:07.773576   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.772864   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.774084   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.775580   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.777154   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.777723   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0417 18:00:07.777750   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.778848   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0417 18:00:07.778873   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.779142   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.779395   83763 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 18:00:07.779481   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.779598   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.780494   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0417 18:00:07.780606   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.781606   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.781614   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.780789   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.780949   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.781000   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.781626   83763 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0417 18:00:07.781641   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 18:00:07.781815   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.781828   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.781965   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.783266   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0417 18:00:07.783324   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.786242   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0417 18:00:07.783639   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.783681   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.783769   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.783943   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.785127   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.787861   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.787922   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.787982   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.787990   83763 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0417 18:00:07.788156   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.788651   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.789337   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0417 18:00:07.789346   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.789779   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.791094   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0417 18:00:07.791105   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0417 18:00:07.791219   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.791517   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.791670   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.791693   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.792853   83763 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0417 18:00:07.792873   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.794304   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0417 18:00:07.794404   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.794488   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.794655   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.796351   83763 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 18:00:07.796370   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0417 18:00:07.797324   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.797910   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.797958   83763 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0417 18:00:07.798572   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.799721   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.799785   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.799818   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0417 18:00:07.799954   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.800017   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0417 18:00:07.800050   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0417 18:00:07.800176   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.801057   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.801453   83763 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0417 18:00:07.801472   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.802957   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.803040   83763 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:00:07.803352   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.803499   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.804069   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.804610   83763 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0417 18:00:07.804807   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.804825   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 18:00:07.806144   83763 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 18:00:07.806157   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0417 18:00:07.806159   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.806175   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.807605   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0417 18:00:07.804854   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.807655   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.805131   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.809124   83763 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0417 18:00:07.805429   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.806764   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.807626   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0417 18:00:07.807923   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.808981   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.809514   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.810461   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.810468   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.812131   83763 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 18:00:07.810708   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.810722   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.810798   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.811168   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.811186   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.811187   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.811181   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.811205   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.811221   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.815552   83763 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 18:00:07.813656   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.813793   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.813813   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.813866   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.813888   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.814145   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.814515   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.816418   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I0417 18:00:07.817035   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.817411   83763 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 18:00:07.817436   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0417 18:00:07.817454   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.817455   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.817568   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.817704   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.817843   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:07.817894   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:07.817958   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.817973   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.818004   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.818123   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.818292   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.818317   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.818334   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.818628   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.818629   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.818665   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.818888   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.818888   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.819069   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.820858   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.821214   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.821246   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.821441   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.821573   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.821714   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.821822   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:07.840248   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0417 18:00:07.840700   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:07.841233   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:07.841258   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:07.841607   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:07.841806   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:07.843444   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:07.845711   83763 out.go:177]   - Using image docker.io/busybox:stable
	I0417 18:00:07.847292   83763 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0417 18:00:07.848929   83763 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 18:00:07.848949   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0417 18:00:07.848968   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:07.852252   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.852554   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:07.852582   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:07.852826   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:07.853028   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:07.853220   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:07.853391   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:08.292669   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0417 18:00:08.307834   83763 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0417 18:00:08.307872   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0417 18:00:08.308172   83763 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0417 18:00:08.308191   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0417 18:00:08.319946   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 18:00:08.329452   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0417 18:00:08.344347   83763 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0417 18:00:08.344375   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0417 18:00:08.346373   83763 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0417 18:00:08.346389   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0417 18:00:08.363330   83763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:00:08.363401   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0417 18:00:08.365084   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0417 18:00:08.370630   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0417 18:00:08.373531   83763 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0417 18:00:08.373555   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0417 18:00:08.374911   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:00:08.376367   83763 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0417 18:00:08.376382   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0417 18:00:08.377750   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0417 18:00:08.377769   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0417 18:00:08.441956   83763 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0417 18:00:08.441980   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0417 18:00:08.456638   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0417 18:00:08.517963   83763 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0417 18:00:08.517990   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0417 18:00:08.540826   83763 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0417 18:00:08.540857   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0417 18:00:08.641531   83763 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0417 18:00:08.641557   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0417 18:00:08.658949   83763 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0417 18:00:08.658985   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0417 18:00:08.673899   83763 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0417 18:00:08.673925   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0417 18:00:08.687623   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0417 18:00:08.687656   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0417 18:00:08.734822   83763 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0417 18:00:08.734858   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0417 18:00:08.739655   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0417 18:00:08.807217   83763 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 18:00:08.807241   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0417 18:00:08.808648   83763 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0417 18:00:08.808664   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0417 18:00:08.855419   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0417 18:00:08.884019   83763 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0417 18:00:08.884047   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0417 18:00:08.895734   83763 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0417 18:00:08.895762   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0417 18:00:08.961414   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0417 18:00:08.961438   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0417 18:00:08.971062   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0417 18:00:08.985507   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0417 18:00:08.985537   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0417 18:00:09.045189   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0417 18:00:09.209337   83763 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0417 18:00:09.209367   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0417 18:00:09.211146   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0417 18:00:09.211164   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0417 18:00:09.275331   83763 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 18:00:09.275357   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0417 18:00:09.461726   83763 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0417 18:00:09.461760   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0417 18:00:09.470032   83763 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0417 18:00:09.470058   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0417 18:00:09.685762   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 18:00:09.693946   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0417 18:00:09.693976   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0417 18:00:09.707668   83763 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0417 18:00:09.707693   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0417 18:00:09.989572   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0417 18:00:09.989601   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0417 18:00:10.030004   83763 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 18:00:10.030036   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0417 18:00:10.290401   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0417 18:00:10.290435   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0417 18:00:10.341344   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0417 18:00:10.778028   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0417 18:00:10.778057   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0417 18:00:11.109601   83763 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 18:00:11.109645   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0417 18:00:11.498373   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0417 18:00:12.184601   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.891884923s)
	I0417 18:00:12.184637   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.864653919s)
	I0417 18:00:12.184674   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:12.184680   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:12.184689   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:12.184690   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:12.185097   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:12.185102   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:12.185163   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:12.185172   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:12.185180   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:12.185102   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:12.185107   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:12.185281   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:12.185293   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:12.185307   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:12.185402   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:12.185414   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:12.185656   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:12.185670   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:12.205788   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:12.205812   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:12.206293   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:12.206325   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:12.206347   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:14.885126   83763 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0417 18:00:14.885167   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:14.888018   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:14.888414   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:14.888447   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:14.888639   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:14.888891   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:14.889073   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:14.889220   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:15.529046   83763 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0417 18:00:15.674127   83763 addons.go:234] Setting addon gcp-auth=true in "addons-221213"
	I0417 18:00:15.674191   83763 host.go:66] Checking if "addons-221213" exists ...
	I0417 18:00:15.674758   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:15.674807   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:15.689932   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0417 18:00:15.690420   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:15.690987   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:15.691020   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:15.691348   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:15.691796   83763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:00:15.691833   83763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:00:15.707339   83763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0417 18:00:15.707847   83763 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:00:15.708307   83763 main.go:141] libmachine: Using API Version  1
	I0417 18:00:15.708329   83763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:00:15.708636   83763 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:00:15.708853   83763 main.go:141] libmachine: (addons-221213) Calling .GetState
	I0417 18:00:15.710375   83763 main.go:141] libmachine: (addons-221213) Calling .DriverName
	I0417 18:00:15.710632   83763 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0417 18:00:15.710663   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHHostname
	I0417 18:00:15.713057   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:15.713486   83763 main.go:141] libmachine: (addons-221213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:2d:a2", ip: ""} in network mk-addons-221213: {Iface:virbr1 ExpiryTime:2024-04-17 18:59:29 +0000 UTC Type:0 Mac:52:54:00:ca:2d:a2 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-221213 Clientid:01:52:54:00:ca:2d:a2}
	I0417 18:00:15.713516   83763 main.go:141] libmachine: (addons-221213) DBG | domain addons-221213 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:2d:a2 in network mk-addons-221213
	I0417 18:00:15.713708   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHPort
	I0417 18:00:15.713870   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHKeyPath
	I0417 18:00:15.714040   83763 main.go:141] libmachine: (addons-221213) Calling .GetSSHUsername
	I0417 18:00:15.714172   83763 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/addons-221213/id_rsa Username:docker}
	I0417 18:00:16.215082   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.885587817s)
	I0417 18:00:16.215133   83763 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.85168915s)
	I0417 18:00:16.215146   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215161   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215164   83763 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0417 18:00:16.215208   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.85009521s)
	I0417 18:00:16.215147   83763 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.851788461s)
	I0417 18:00:16.215255   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215253   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.844593356s)
	I0417 18:00:16.215300   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215315   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215321   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.840391289s)
	I0417 18:00:16.215342   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215354   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215267   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215424   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.215430   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.215439   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.215447   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215458   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215461   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758800029s)
	I0417 18:00:16.215475   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215483   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215547   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.475865401s)
	I0417 18:00:16.215561   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215569   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215632   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.36018152s)
	I0417 18:00:16.215650   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.215660   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.215669   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215676   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215701   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.244609741s)
	I0417 18:00:16.215717   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215723   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215725   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215731   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215760   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.215791   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.215793   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.170568425s)
	I0417 18:00:16.215798   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.215805   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215807   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.215812   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215816   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.215941   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.530136638s)
	W0417 18:00:16.215966   83763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0417 18:00:16.215984   83763 retry.go:31] will retry after 307.556952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0417 18:00:16.216083   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.8746854s)
	I0417 18:00:16.216107   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.216116   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.216193   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.216201   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.216259   83763 node_ready.go:35] waiting up to 6m0s for node "addons-221213" to be "Ready" ...
	I0417 18:00:16.216494   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.216524   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.216532   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.216540   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.216547   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.216604   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.216624   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.216630   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.216637   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.216644   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.216686   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.216706   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.216713   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.216720   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.216726   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.217416   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.217443   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.217450   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.217459   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.217465   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.217507   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.217524   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.217530   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.217537   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.217543   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.217785   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.217809   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.217816   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.217834   83763 addons.go:470] Verifying addon metrics-server=true in "addons-221213"
	I0417 18:00:16.219859   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.219868   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.219879   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.219886   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.219912   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.219919   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.220013   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.220034   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.220054   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220061   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.220069   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.220075   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.220197   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220207   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.220733   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.220764   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220783   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.223538   83763 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-221213 service yakd-dashboard -n yakd-dashboard
	
	I0417 18:00:16.220853   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220875   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.220901   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220916   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.220938   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.220952   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.221007   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.225016   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.225030   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.225018   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.225027   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.225040   83763 addons.go:470] Verifying addon ingress=true in "addons-221213"
	I0417 18:00:16.225105   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.225114   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.226755   83763 out.go:177] * Verifying ingress addon...
	I0417 18:00:16.225456   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.225481   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:16.228249   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.228270   83763 addons.go:470] Verifying addon registry=true in "addons-221213"
	I0417 18:00:16.229769   83763 out.go:177] * Verifying registry addon...
	I0417 18:00:16.228997   83763 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0417 18:00:16.231957   83763 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0417 18:00:16.283690   83763 node_ready.go:49] node "addons-221213" has status "Ready":"True"
	I0417 18:00:16.283722   83763 node_ready.go:38] duration metric: took 67.442658ms for node "addons-221213" to be "Ready" ...
	I0417 18:00:16.283737   83763 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:00:16.286886   83763 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0417 18:00:16.286909   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:16.287843   83763 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0417 18:00:16.287864   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:16.313412   83763 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqs49" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:16.326496   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:16.326521   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:16.326960   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:16.326983   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:16.361186   83763 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqs49" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:16.361213   83763 pod_ready.go:81] duration metric: took 47.771128ms for pod "coredns-7db6d8ff4d-cqs49" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:16.361223   83763 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:16.524441   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0417 18:00:16.720663   83763 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-221213" context rescaled to 1 replicas
	I0417 18:00:16.739342   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:16.742997   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:17.237560   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:17.237919   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:17.737831   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:17.738309   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:18.242390   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:18.255817   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:18.339994   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.841557884s)
	I0417 18:00:18.340057   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:18.340075   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:18.340080   83763 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.629419571s)
	I0417 18:00:18.341963   83763 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0417 18:00:18.340409   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:18.340440   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:18.343448   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:18.343471   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:18.343484   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:18.344895   83763 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0417 18:00:18.343748   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:18.343784   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:18.346400   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:18.346414   83763 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0417 18:00:18.346427   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0417 18:00:18.346430   83763 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-221213"
	I0417 18:00:18.348090   83763 out.go:177] * Verifying csi-hostpath-driver addon...
	I0417 18:00:18.350630   83763 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0417 18:00:18.441120   83763 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0417 18:00:18.441144   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:18.454692   83763 pod_ready.go:102] pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:18.572745   83763 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0417 18:00:18.572805   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0417 18:00:18.741446   83763 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 18:00:18.741467   83763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0417 18:00:18.743657   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:18.744735   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:18.844397   83763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0417 18:00:18.858563   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:19.239589   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:19.241771   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:19.329907   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.805408745s)
	I0417 18:00:19.329977   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:19.329997   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:19.330274   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:19.330292   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:19.330302   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:19.330325   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:19.330387   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:19.330631   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:19.330647   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:19.356832   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:19.737251   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:19.742592   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:19.861305   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:20.141660   83763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.297218783s)
	I0417 18:00:20.141714   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:20.141736   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:20.142064   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:20.142082   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:20.142081   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:20.142096   83763 main.go:141] libmachine: Making call to close driver server
	I0417 18:00:20.142104   83763 main.go:141] libmachine: (addons-221213) Calling .Close
	I0417 18:00:20.142367   83763 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:00:20.142390   83763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:00:20.142398   83763 main.go:141] libmachine: (addons-221213) DBG | Closing plugin on server side
	I0417 18:00:20.143599   83763 addons.go:470] Verifying addon gcp-auth=true in "addons-221213"
	I0417 18:00:20.145548   83763 out.go:177] * Verifying gcp-auth addon...
	I0417 18:00:20.148183   83763 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0417 18:00:20.170193   83763 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0417 18:00:20.170215   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:20.236544   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:20.237111   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:20.356088   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:20.652086   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:20.735314   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:20.739439   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:20.859020   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:20.868093   83763 pod_ready.go:102] pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:21.152608   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:21.237514   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:21.238059   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:21.368059   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:21.653034   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:21.738430   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:21.738589   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:21.859855   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:22.152223   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:22.238207   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:22.238570   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:22.356727   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:22.651910   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:22.737885   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:22.738189   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:22.856872   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:22.868412   83763 pod_ready.go:102] pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:23.152114   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:23.236364   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:23.237029   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:23.357418   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:23.367877   83763 pod_ready.go:97] pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.199 HostIPs:[{IP:192.168.39.199}] PodIP: PodIPs:[] StartTime:2024-04-17 18:00:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-17 18:00:11 +0000 UTC,FinishedAt:2024-04-17 18:00:21 +0000 UTC,ContainerID:cri-o://93a82e8ca00def14984dc4caf6e1be7b5fd1451ad64a8b14544963c3ea786042,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://93a82e8ca00def14984dc4caf6e1be7b5fd1451ad64a8b14544963c3ea786042 Started:0xc0032cb640 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0417 18:00:23.367954   83763 pod_ready.go:81] duration metric: took 7.006721849s for pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace to be "Ready" ...
	E0417 18:00:23.367977   83763 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-ln4cv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-17 18:00:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.199 HostIPs:[{IP:192.168.39.199}] PodIP: PodIPs:[] StartTime:2024-04-17 18:00:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-17 18:00:11 +0000 UTC,FinishedAt:2024-04-17 18:00:21 +0000 UTC,ContainerID:cri-o://93a82e8ca00def14984dc4caf6e1be7b5fd1451ad64a8b14544963c3ea786042,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://93a82e8ca00def14984dc4caf6e1be7b5fd1451ad64a8b14544963c3ea786042 Started:0xc0032cb640 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0417 18:00:23.367989   83763 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.373119   83763 pod_ready.go:92] pod "etcd-addons-221213" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:23.373142   83763 pod_ready.go:81] duration metric: took 5.137395ms for pod "etcd-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.373155   83763 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.377380   83763 pod_ready.go:92] pod "kube-apiserver-addons-221213" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:23.377406   83763 pod_ready.go:81] duration metric: took 4.240133ms for pod "kube-apiserver-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.377418   83763 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.382948   83763 pod_ready.go:92] pod "kube-controller-manager-addons-221213" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:23.382970   83763 pod_ready.go:81] duration metric: took 5.544358ms for pod "kube-controller-manager-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.382985   83763 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hhwq6" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.394692   83763 pod_ready.go:92] pod "kube-proxy-hhwq6" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:23.394719   83763 pod_ready.go:81] duration metric: took 11.726344ms for pod "kube-proxy-hhwq6" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.394731   83763 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.652349   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:23.738280   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:23.743410   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:23.766504   83763 pod_ready.go:92] pod "kube-scheduler-addons-221213" in "kube-system" namespace has status "Ready":"True"
	I0417 18:00:23.766536   83763 pod_ready.go:81] duration metric: took 371.795804ms for pod "kube-scheduler-addons-221213" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.766550   83763 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace to be "Ready" ...
	I0417 18:00:23.856746   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:24.152093   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:24.238344   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:24.238348   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:24.358730   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:24.651450   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:24.737990   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:24.737996   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:24.857213   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:25.152132   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:25.236239   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:25.237443   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:25.360704   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:25.651535   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:25.739126   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:25.740067   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:25.772535   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:25.856372   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:26.152744   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:26.236414   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:26.238344   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:26.356400   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:26.652658   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:26.737046   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:26.737678   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:26.857154   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:27.152211   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:27.250970   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:27.251473   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:27.357424   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:27.652486   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:27.747502   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:27.747526   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:27.786464   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:27.857215   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:28.153301   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:28.238383   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:28.238898   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:28.357017   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:28.652139   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:28.738716   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:28.739401   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:28.858027   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:29.153078   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:29.235956   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:29.237043   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:29.356839   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:29.652315   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:29.737396   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:29.737532   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:29.856596   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:30.152344   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:30.246424   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:30.246720   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:30.272664   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:30.356296   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:30.652155   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:30.743583   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:30.743758   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:30.860699   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:31.152721   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:31.235718   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:31.238445   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:31.361503   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:31.775438   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:31.775610   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:31.775835   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:31.856414   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:32.151912   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:32.240274   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:32.240966   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:32.273512   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:32.429470   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:32.651705   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:32.737294   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:32.739302   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:32.856561   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:33.153159   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:33.236722   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:33.236976   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:33.359820   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:33.652468   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:33.736168   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:33.737294   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:33.861076   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:34.153660   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:34.238316   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:34.238826   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:34.273838   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:34.356783   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:34.652326   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:34.737848   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:34.739018   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:34.858456   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:35.157412   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:35.238317   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:35.239774   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:35.357723   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:35.652592   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:35.736038   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:35.738024   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:35.856277   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:36.152629   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:36.238652   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:36.239329   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:36.280243   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:36.356398   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:36.652186   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:36.737370   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:36.739012   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:36.858418   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:37.152424   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:37.236478   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:37.239423   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:37.356212   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:37.657137   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:37.738914   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:37.739417   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:37.856468   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:38.157820   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:38.236024   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:38.239292   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:38.356971   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:38.652503   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:38.744940   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:38.745533   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:38.780706   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:38.857185   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:39.152271   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:39.248233   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:39.249608   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:39.360348   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:39.652026   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:39.742547   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:39.744865   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:39.856436   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:40.152349   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:40.236537   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:40.238646   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:40.356868   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:40.653044   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:40.737007   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:40.737968   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:40.858084   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:41.152711   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:41.235819   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:41.238039   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:41.274018   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:41.358377   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:41.652564   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:41.740887   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:41.745365   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:41.857534   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:42.152173   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:42.237552   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:42.238502   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:42.356330   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:42.652278   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:42.738576   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:42.738588   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:42.856058   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:43.151794   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:43.237011   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:43.238934   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:43.356000   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:43.651959   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:43.738523   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:43.738592   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:43.773543   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:44.419814   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:44.421638   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:44.425360   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:44.425737   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:44.429811   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:44.652849   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:44.744081   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:44.744696   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:44.856502   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:45.153001   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:45.240574   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:45.241998   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:45.357343   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:45.652416   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:45.742453   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:45.742605   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:45.858676   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:46.155011   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:46.236162   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:46.238848   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:46.274077   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:46.356517   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:46.652078   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:46.739461   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:46.739971   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:46.857208   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:47.152264   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:47.238351   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:47.238447   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:47.356556   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:47.651897   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:47.741952   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:47.743318   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:47.868762   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:48.152847   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:48.239492   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:48.239613   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:48.356560   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:48.652505   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:48.736580   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:48.738278   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:48.773861   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:48.858511   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:49.152486   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:49.237721   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:49.239235   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:49.356225   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:49.652686   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:49.737781   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:49.738324   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:49.865276   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:50.152926   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:50.236841   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:50.236972   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:50.359033   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:50.652402   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:50.739041   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:50.740734   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:50.856913   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:51.152493   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:51.236065   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:51.239024   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:51.274234   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:51.356689   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:51.652571   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:51.735872   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:51.737377   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:51.856277   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:52.151929   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:52.237650   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:52.237843   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:52.356807   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:52.933336   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:52.940395   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:52.941757   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:52.942601   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:53.152037   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:53.237112   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:53.237516   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:53.275465   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:53.356440   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:53.652408   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:53.735143   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:53.736959   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:53.856269   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:54.152455   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:54.237525   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:54.237830   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:54.356566   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:54.652667   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:54.742280   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:54.742316   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:54.856470   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:55.160509   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:55.237128   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:55.237871   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:55.357472   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:55.652283   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:55.740015   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:55.741813   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:55.773298   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:55.856221   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:56.152153   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:56.238683   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:56.239902   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:56.356418   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:56.652527   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:56.736020   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:56.737594   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0417 18:00:56.856340   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:57.152611   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:57.237103   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:57.238665   83763 kapi.go:107] duration metric: took 41.006701461s to wait for kubernetes.io/minikube-addons=registry ...
	I0417 18:00:57.356007   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:57.651971   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:57.736721   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:57.775763   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:00:57.856223   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:58.151918   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:58.236331   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:58.357810   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:58.652472   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:58.738887   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:58.856130   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:59.152140   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:59.235857   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:59.355957   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:00:59.651718   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:00:59.736639   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:00:59.856532   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:00.152356   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:00.236511   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:00.273761   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:00.357208   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:00.652049   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:00.736313   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:00.856870   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:01.152025   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:01.236308   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:01.357855   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:01.653193   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:01.739994   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:01.856724   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:02.153103   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:02.237265   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:02.274556   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:02.358378   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:02.654970   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:02.737040   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:02.858044   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:03.152644   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:03.236112   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:03.357287   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:03.652014   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:03.736491   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:03.856803   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:04.152881   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:04.235687   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:04.380805   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:04.652557   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:04.739048   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:04.774232   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:04.856828   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:05.153450   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:05.241351   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:05.356894   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:05.653198   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:05.738568   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:05.857463   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:06.152932   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:06.238636   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:06.356671   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:06.652295   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:06.736855   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:06.774785   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:06.858090   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:07.154162   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:07.236510   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:07.356999   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:07.652396   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:07.735666   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:07.856763   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:08.152946   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:08.235528   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:08.356285   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:08.652658   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:08.736978   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:08.856089   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:09.269191   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:09.269211   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:09.275228   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:09.356485   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:09.652957   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:09.736033   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:09.857632   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:10.152475   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:10.236078   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:10.359997   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:10.653062   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:10.739633   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:10.861330   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:11.151431   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:11.236190   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:11.284546   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:11.356880   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:11.651935   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:11.736848   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:11.855789   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:12.151818   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:12.236996   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:12.641211   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:12.657417   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:12.735811   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:12.870644   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:13.152113   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:13.236923   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:13.358136   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:13.652979   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:13.736138   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:13.774119   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:13.857139   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:14.152643   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:14.237379   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:14.361698   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:14.653717   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:14.739052   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:14.856859   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:15.152734   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:15.235861   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:15.357239   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:15.670130   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:15.738733   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:16.078754   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:16.080901   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:16.152694   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:16.239063   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:16.357510   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:16.652522   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:16.737041   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:16.856914   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:17.151436   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:17.236847   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:17.356730   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:17.657159   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:17.735780   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:17.858852   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:18.152663   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:18.236354   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:18.273685   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:18.358104   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:18.653896   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:18.742080   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:18.856511   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:19.152534   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:19.236577   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:19.365008   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:19.653397   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:19.735584   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:19.856924   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:20.151845   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:20.235593   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:20.276588   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:20.359132   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:20.652404   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:20.736762   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:20.857886   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:21.151722   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:21.236039   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:21.356824   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:21.661605   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:21.735569   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:21.857446   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:22.152766   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:22.236098   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:22.374439   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:22.652304   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:22.736315   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:22.773224   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:22.859009   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:23.153955   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:23.241947   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:23.819022   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:23.820843   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:23.821685   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:23.860943   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:24.152343   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:24.239997   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:24.356280   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:24.652291   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:24.737017   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:24.856316   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:25.152432   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:25.238518   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:25.279142   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:25.357382   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:25.652671   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:25.736065   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:25.856337   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:26.152224   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:26.236800   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:26.356352   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:26.652473   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:26.736457   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:26.858042   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:27.152542   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:27.236284   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:27.358032   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:27.679386   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:27.736862   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:27.772705   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:27.856498   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:28.152160   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:28.236491   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:28.356394   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:28.652325   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:28.736527   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:28.856345   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:29.155145   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:29.239285   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:29.358349   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:29.652600   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:29.736077   83763 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0417 18:01:29.774058   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:29.856419   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:30.151902   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:30.237162   83763 kapi.go:107] duration metric: took 1m14.008154314s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0417 18:01:30.356973   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:30.652194   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:30.856198   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:31.151870   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:31.356671   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:31.654533   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:31.774270   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:31.861947   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:32.153471   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:32.357588   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:32.652835   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:32.857157   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:33.153120   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0417 18:01:33.356433   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:33.652597   83763 kapi.go:107] duration metric: took 1m13.504409599s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0417 18:01:33.654541   83763 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-221213 cluster.
	I0417 18:01:33.656091   83763 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0417 18:01:33.657681   83763 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0417 18:01:33.774739   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:33.857192   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:34.356590   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:34.856752   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:35.357445   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:35.856661   83763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0417 18:01:36.278322   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:36.357020   83763 kapi.go:107] duration metric: took 1m18.006387102s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0417 18:01:36.358986   83763 out.go:177] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, metrics-server, nvidia-device-plugin, storage-provisioner, helm-tiller, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0417 18:01:36.360924   83763 addons.go:505] duration metric: took 1m28.717457096s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner metrics-server nvidia-device-plugin storage-provisioner helm-tiller yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0417 18:01:38.287976   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:40.774742   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:42.775356   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:45.273347   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:47.276460   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:49.776859   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:52.273411   83763 pod_ready.go:102] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"False"
	I0417 18:01:54.272455   83763 pod_ready.go:92] pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace has status "Ready":"True"
	I0417 18:01:54.272478   83763 pod_ready.go:81] duration metric: took 1m30.505919707s for pod "metrics-server-c59844bb4-qrcnl" in "kube-system" namespace to be "Ready" ...
	I0417 18:01:54.272488   83763 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fvfbv" in "kube-system" namespace to be "Ready" ...
	I0417 18:01:54.277922   83763 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-fvfbv" in "kube-system" namespace has status "Ready":"True"
	I0417 18:01:54.277947   83763 pod_ready.go:81] duration metric: took 5.45011ms for pod "nvidia-device-plugin-daemonset-fvfbv" in "kube-system" namespace to be "Ready" ...
	I0417 18:01:54.277970   83763 pod_ready.go:38] duration metric: took 1m37.994219501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:01:54.277995   83763 api_server.go:52] waiting for apiserver process to appear ...
	I0417 18:01:54.278035   83763 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 18:01:54.278110   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 18:01:54.336956   83763 cri.go:89] found id: "cf9babaae1a3ac7854b2b01c5cdd905e506f2a809adc62ab6db3c3f5d8a9f879"
	I0417 18:01:54.336979   83763 cri.go:89] found id: ""
	I0417 18:01:54.336988   83763 logs.go:276] 1 containers: [cf9babaae1a3ac7854b2b01c5cdd905e506f2a809adc62ab6db3c3f5d8a9f879]
	I0417 18:01:54.337046   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.342533   83763 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 18:01:54.342640   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 18:01:54.401031   83763 cri.go:89] found id: "9987b117a2003402fe49a7e5bb20fa910affa92366b5451e3f242596465464f2"
	I0417 18:01:54.401057   83763 cri.go:89] found id: ""
	I0417 18:01:54.401069   83763 logs.go:276] 1 containers: [9987b117a2003402fe49a7e5bb20fa910affa92366b5451e3f242596465464f2]
	I0417 18:01:54.401128   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.405543   83763 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 18:01:54.405612   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 18:01:54.446219   83763 cri.go:89] found id: "983bd1fea4fdd26bbd10c7790739492ea107dfa78e991e2d21983a7601b56bcb"
	I0417 18:01:54.446243   83763 cri.go:89] found id: ""
	I0417 18:01:54.446257   83763 logs.go:276] 1 containers: [983bd1fea4fdd26bbd10c7790739492ea107dfa78e991e2d21983a7601b56bcb]
	I0417 18:01:54.446303   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.451627   83763 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 18:01:54.451701   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 18:01:54.505930   83763 cri.go:89] found id: "6b0466fba163eb466c2b80ef179cc96b60666c80b6ce424a005b9b70efff33e8"
	I0417 18:01:54.505957   83763 cri.go:89] found id: ""
	I0417 18:01:54.505968   83763 logs.go:276] 1 containers: [6b0466fba163eb466c2b80ef179cc96b60666c80b6ce424a005b9b70efff33e8]
	I0417 18:01:54.506030   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.511603   83763 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 18:01:54.511672   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 18:01:54.549301   83763 cri.go:89] found id: "6dc692e41647ec1abe22ed7b48ffb0d2bdd6aabf6752584088a5557ad295a8a6"
	I0417 18:01:54.549324   83763 cri.go:89] found id: ""
	I0417 18:01:54.549332   83763 logs.go:276] 1 containers: [6dc692e41647ec1abe22ed7b48ffb0d2bdd6aabf6752584088a5557ad295a8a6]
	I0417 18:01:54.549384   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.553919   83763 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 18:01:54.553979   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 18:01:54.596738   83763 cri.go:89] found id: "2c7aecf3c666f94b144db488c1a1e3ab01e25f9c5ce2d44c0b579223b277ac76"
	I0417 18:01:54.596760   83763 cri.go:89] found id: ""
	I0417 18:01:54.596782   83763 logs.go:276] 1 containers: [2c7aecf3c666f94b144db488c1a1e3ab01e25f9c5ce2d44c0b579223b277ac76]
	I0417 18:01:54.596836   83763 ssh_runner.go:195] Run: which crictl
	I0417 18:01:54.601298   83763 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 18:01:54.601359   83763 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 18:01:54.658594   83763 cri.go:89] found id: ""
	I0417 18:01:54.658626   83763 logs.go:276] 0 containers: []
	W0417 18:01:54.658638   83763 logs.go:278] No container was found matching "kindnet"
	I0417 18:01:54.658649   83763 logs.go:123] Gathering logs for kube-apiserver [cf9babaae1a3ac7854b2b01c5cdd905e506f2a809adc62ab6db3c3f5d8a9f879] ...
	I0417 18:01:54.658670   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf9babaae1a3ac7854b2b01c5cdd905e506f2a809adc62ab6db3c3f5d8a9f879"
	I0417 18:01:54.712438   83763 logs.go:123] Gathering logs for kube-proxy [6dc692e41647ec1abe22ed7b48ffb0d2bdd6aabf6752584088a5557ad295a8a6] ...
	I0417 18:01:54.712471   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc692e41647ec1abe22ed7b48ffb0d2bdd6aabf6752584088a5557ad295a8a6"
	I0417 18:01:54.760822   83763 logs.go:123] Gathering logs for kube-controller-manager [2c7aecf3c666f94b144db488c1a1e3ab01e25f9c5ce2d44c0b579223b277ac76] ...
	I0417 18:01:54.760862   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c7aecf3c666f94b144db488c1a1e3ab01e25f9c5ce2d44c0b579223b277ac76"
	I0417 18:01:54.827097   83763 logs.go:123] Gathering logs for container status ...
	I0417 18:01:54.827137   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0417 18:01:54.882084   83763 logs.go:123] Gathering logs for describe nodes ...
	I0417 18:01:54.882126   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0417 18:01:55.026554   83763 logs.go:123] Gathering logs for dmesg ...
	I0417 18:01:55.026589   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 18:01:55.046763   83763 logs.go:123] Gathering logs for etcd [9987b117a2003402fe49a7e5bb20fa910affa92366b5451e3f242596465464f2] ...
	I0417 18:01:55.046804   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9987b117a2003402fe49a7e5bb20fa910affa92366b5451e3f242596465464f2"
	I0417 18:01:55.103680   83763 logs.go:123] Gathering logs for coredns [983bd1fea4fdd26bbd10c7790739492ea107dfa78e991e2d21983a7601b56bcb] ...
	I0417 18:01:55.103715   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 983bd1fea4fdd26bbd10c7790739492ea107dfa78e991e2d21983a7601b56bcb"
	I0417 18:01:55.149009   83763 logs.go:123] Gathering logs for kube-scheduler [6b0466fba163eb466c2b80ef179cc96b60666c80b6ce424a005b9b70efff33e8] ...
	I0417 18:01:55.149047   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b0466fba163eb466c2b80ef179cc96b60666c80b6ce424a005b9b70efff33e8"
	I0417 18:01:55.199643   83763 logs.go:123] Gathering logs for CRI-O ...
	I0417 18:01:55.199680   83763 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-linux-amd64 start -p addons-221213 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.04s)
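The stderr above shows minikube's kapi.go polling addon pods by label (for example "kubernetes.io/minikube-addons=csi-hostpath-driver" or "app.kubernetes.io/name=ingress-nginx") until every matching pod reports Ready. The sketch below is only a rough client-go illustration of that label-based wait pattern, not minikube's actual kapi.go code; the namespace, selector, poll interval, and timeout used here are assumptions chosen for the example.

```go
// podwait_sketch.go: illustrative label-based pod wait using client-go.
// NOT minikube's kapi.go; namespace, selector and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until every pod matching selector in ns reports the
// Ready condition as True, roughly mirroring the "waiting for pod ..." lines above.
func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient error or nothing scheduled yet: keep polling
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false, nil // at least one pod is still Pending / not Ready
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector and namespace are assumptions for the example.
	if err := waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		fmt.Println("addon pods never became Ready:", err)
	}
}
```

This only mirrors the polling pattern visible in the log (the addon waits themselves completed around 18:01); it does not explain why the start command was still running when it was killed at the 40-minute mark.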

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 node stop m02 -v=7 --alsologtostderr
E0417 18:54:00.282952   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:54:41.243382   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.504725383s)

                                                
                                                
-- stdout --
	* Stopping node "ha-467706-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:53:56.269263   99782 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:53:56.269380   99782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:53:56.269389   99782 out.go:304] Setting ErrFile to fd 2...
	I0417 18:53:56.269394   99782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:53:56.269594   99782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:53:56.269850   99782 mustload.go:65] Loading cluster: ha-467706
	I0417 18:53:56.270212   99782 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:53:56.270234   99782 stop.go:39] StopHost: ha-467706-m02
	I0417 18:53:56.270585   99782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:53:56.270634   99782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:53:56.287792   99782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I0417 18:53:56.288354   99782 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:53:56.289090   99782 main.go:141] libmachine: Using API Version  1
	I0417 18:53:56.289117   99782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:53:56.289581   99782 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:53:56.291763   99782 out.go:177] * Stopping node "ha-467706-m02"  ...
	I0417 18:53:56.293124   99782 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0417 18:53:56.293174   99782 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:53:56.293457   99782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0417 18:53:56.293506   99782 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:53:56.296960   99782 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:53:56.297514   99782 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:53:56.297553   99782 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:53:56.297759   99782 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:53:56.297961   99782 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:53:56.298139   99782 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:53:56.298460   99782 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:53:56.387906   99782 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0417 18:53:56.445611   99782 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0417 18:53:56.503252   99782 main.go:141] libmachine: Stopping "ha-467706-m02"...
	I0417 18:53:56.503292   99782 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:53:56.505182   99782 main.go:141] libmachine: (ha-467706-m02) Calling .Stop
	I0417 18:53:56.509346   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 0/120
	I0417 18:53:57.511464   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 1/120
	I0417 18:53:58.513862   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 2/120
	I0417 18:53:59.515894   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 3/120
	I0417 18:54:00.517409   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 4/120
	I0417 18:54:01.519214   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 5/120
	I0417 18:54:02.520801   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 6/120
	I0417 18:54:03.522468   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 7/120
	I0417 18:54:04.524001   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 8/120
	I0417 18:54:05.525463   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 9/120
	I0417 18:54:06.527861   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 10/120
	I0417 18:54:07.529741   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 11/120
	I0417 18:54:08.531101   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 12/120
	I0417 18:54:09.532462   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 13/120
	I0417 18:54:10.534121   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 14/120
	I0417 18:54:11.535488   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 15/120
	I0417 18:54:12.536847   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 16/120
	I0417 18:54:13.538288   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 17/120
	I0417 18:54:14.539890   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 18/120
	I0417 18:54:15.541334   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 19/120
	I0417 18:54:16.543340   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 20/120
	I0417 18:54:17.544849   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 21/120
	I0417 18:54:18.546221   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 22/120
	I0417 18:54:19.547802   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 23/120
	I0417 18:54:20.549331   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 24/120
	I0417 18:54:21.551003   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 25/120
	I0417 18:54:22.552313   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 26/120
	I0417 18:54:23.553601   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 27/120
	I0417 18:54:24.555060   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 28/120
	I0417 18:54:25.557377   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 29/120
	I0417 18:54:26.559497   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 30/120
	I0417 18:54:27.561019   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 31/120
	I0417 18:54:28.562465   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 32/120
	I0417 18:54:29.564183   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 33/120
	I0417 18:54:30.565419   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 34/120
	I0417 18:54:31.567114   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 35/120
	I0417 18:54:32.569399   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 36/120
	I0417 18:54:33.570855   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 37/120
	I0417 18:54:34.572968   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 38/120
	I0417 18:54:35.574375   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 39/120
	I0417 18:54:36.575470   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 40/120
	I0417 18:54:37.576924   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 41/120
	I0417 18:54:38.578402   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 42/120
	I0417 18:54:39.580045   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 43/120
	I0417 18:54:40.581410   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 44/120
	I0417 18:54:41.583418   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 45/120
	I0417 18:54:42.584825   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 46/120
	I0417 18:54:43.586226   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 47/120
	I0417 18:54:44.587859   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 48/120
	I0417 18:54:45.589549   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 49/120
	I0417 18:54:46.591749   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 50/120
	I0417 18:54:47.593202   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 51/120
	I0417 18:54:48.595630   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 52/120
	I0417 18:54:49.596916   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 53/120
	I0417 18:54:50.598220   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 54/120
	I0417 18:54:51.600475   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 55/120
	I0417 18:54:52.602586   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 56/120
	I0417 18:54:53.604317   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 57/120
	I0417 18:54:54.605685   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 58/120
	I0417 18:54:55.607443   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 59/120
	I0417 18:54:56.609768   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 60/120
	I0417 18:54:57.611452   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 61/120
	I0417 18:54:58.612855   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 62/120
	I0417 18:54:59.614236   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 63/120
	I0417 18:55:00.615850   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 64/120
	I0417 18:55:01.617854   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 65/120
	I0417 18:55:02.619530   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 66/120
	I0417 18:55:03.621009   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 67/120
	I0417 18:55:04.623306   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 68/120
	I0417 18:55:05.625042   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 69/120
	I0417 18:55:06.626929   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 70/120
	I0417 18:55:07.628684   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 71/120
	I0417 18:55:08.630279   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 72/120
	I0417 18:55:09.632547   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 73/120
	I0417 18:55:10.633907   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 74/120
	I0417 18:55:11.636003   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 75/120
	I0417 18:55:12.637790   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 76/120
	I0417 18:55:13.639315   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 77/120
	I0417 18:55:14.640690   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 78/120
	I0417 18:55:15.642694   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 79/120
	I0417 18:55:16.644079   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 80/120
	I0417 18:55:17.646130   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 81/120
	I0417 18:55:18.647751   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 82/120
	I0417 18:55:19.649134   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 83/120
	I0417 18:55:20.650828   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 84/120
	I0417 18:55:21.653006   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 85/120
	I0417 18:55:22.654322   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 86/120
	I0417 18:55:23.655779   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 87/120
	I0417 18:55:24.657156   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 88/120
	I0417 18:55:25.658760   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 89/120
	I0417 18:55:26.660822   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 90/120
	I0417 18:55:27.662170   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 91/120
	I0417 18:55:28.663647   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 92/120
	I0417 18:55:29.665306   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 93/120
	I0417 18:55:30.667682   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 94/120
	I0417 18:55:31.669646   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 95/120
	I0417 18:55:32.671360   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 96/120
	I0417 18:55:33.672582   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 97/120
	I0417 18:55:34.674243   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 98/120
	I0417 18:55:35.676378   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 99/120
	I0417 18:55:36.678794   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 100/120
	I0417 18:55:37.680352   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 101/120
	I0417 18:55:38.682598   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 102/120
	I0417 18:55:39.683954   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 103/120
	I0417 18:55:40.686023   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 104/120
	I0417 18:55:41.688078   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 105/120
	I0417 18:55:42.689391   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 106/120
	I0417 18:55:43.691933   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 107/120
	I0417 18:55:44.693574   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 108/120
	I0417 18:55:45.695077   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 109/120
	I0417 18:55:46.697605   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 110/120
	I0417 18:55:47.699547   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 111/120
	I0417 18:55:48.701034   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 112/120
	I0417 18:55:49.703441   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 113/120
	I0417 18:55:50.704917   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 114/120
	I0417 18:55:51.706197   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 115/120
	I0417 18:55:52.707680   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 116/120
	I0417 18:55:53.710106   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 117/120
	I0417 18:55:54.712293   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 118/120
	I0417 18:55:55.713687   99782 main.go:141] libmachine: (ha-467706-m02) Waiting for machine to stop 119/120
	I0417 18:55:56.714245   99782 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0417 18:55:56.714467   99782 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-467706 node stop m02 -v=7 --alsologtostderr": exit status 30
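The stderr above shows the driver requesting a stop and then polling the VM state once per second ("Waiting for machine to stop N/120") before giving up with 'unable to stop vm, current state "Running"'. The following is only an illustrative reconstruction of that retry shape, not the actual libmachine/kvm2 driver code; every name in it is invented for the example.

```go
// vmstop_sketch.go: illustrative reconstruction of the one-second retry loop
// seen above ("Waiting for machine to stop N/120"). NOT the real libmachine or
// kvm2 driver code; all names and types here are invented for the example.
package main

import (
	"fmt"
	"time"
)

type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

// stopWithRetries issues a stop request, then polls getState once per second
// for up to maxAttempts attempts before giving up.
func stopWithRetries(requestStop func() error, getState func() vmState, maxAttempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		if getState() == stateStopped {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", string(stateRunning))
}

func main() {
	// A stub VM that never leaves the Running state, like ha-467706-m02 above.
	// With 120 attempts this takes about two minutes to fail, matching the log.
	err := stopWithRetries(
		func() error { return nil },            // stop request accepted...
		func() vmState { return stateRunning }, // ...but the guest never shuts down
		120,
	)
	if err != nil {
		fmt.Println("X Failed to stop node:", err)
	}
}
```

A VM that never reports Stopped exhausts all 120 attempts, which matches the roughly two-minute duration (2m0.5s) and the exit status 30 recorded above.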
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
E0417 18:56:03.164038   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (19.117911745s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:55:56.772138  100087 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:55:56.772290  100087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:55:56.772302  100087 out.go:304] Setting ErrFile to fd 2...
	I0417 18:55:56.772308  100087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:55:56.772537  100087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:55:56.772748  100087 out.go:298] Setting JSON to false
	I0417 18:55:56.772795  100087 mustload.go:65] Loading cluster: ha-467706
	I0417 18:55:56.772831  100087 notify.go:220] Checking for updates...
	I0417 18:55:56.774188  100087 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:55:56.774239  100087 status.go:255] checking status of ha-467706 ...
	I0417 18:55:56.775138  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:56.775217  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:56.790953  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0417 18:55:56.791377  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:56.792214  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:56.792243  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:56.792622  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:56.792830  100087 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:55:56.794736  100087 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:55:56.794757  100087 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:55:56.795082  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:56.795159  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:56.810262  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0417 18:55:56.810637  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:56.811096  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:56.811117  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:56.811411  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:56.811608  100087 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:55:56.814223  100087 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:55:56.814677  100087 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:55:56.814711  100087 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:55:56.814839  100087 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:55:56.815117  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:56.815157  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:56.830385  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40041
	I0417 18:55:56.830772  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:56.831179  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:56.831200  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:56.831626  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:56.831848  100087 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:55:56.832054  100087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:55:56.832082  100087 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:55:56.834751  100087 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:55:56.835113  100087 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:55:56.835136  100087 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:55:56.835318  100087 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:55:56.835507  100087 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:55:56.835679  100087 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:55:56.835809  100087 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:55:56.920293  100087 ssh_runner.go:195] Run: systemctl --version
	I0417 18:55:56.928101  100087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:55:56.947386  100087 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:55:56.947424  100087 api_server.go:166] Checking apiserver status ...
	I0417 18:55:56.947461  100087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:55:56.966181  100087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:55:56.978350  100087 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:55:56.978419  100087 ssh_runner.go:195] Run: ls
	I0417 18:55:56.983561  100087 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:55:56.987859  100087 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:55:56.987885  100087 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:55:56.987895  100087 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:55:56.987919  100087 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:55:56.988237  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:56.988278  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:57.004413  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42593
	I0417 18:55:57.004946  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:57.005536  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:57.005567  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:57.005951  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:57.006160  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:55:57.007798  100087 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:55:57.007818  100087 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:55:57.008106  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:57.008142  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:57.023441  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0417 18:55:57.023879  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:57.024467  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:57.024498  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:57.024959  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:57.025186  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:55:57.028106  100087 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:55:57.028484  100087 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:55:57.028521  100087 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:55:57.028702  100087 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:55:57.029064  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:55:57.029104  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:55:57.043779  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0417 18:55:57.044141  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:55:57.044628  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:55:57.044649  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:55:57.045053  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:55:57.045267  100087 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:55:57.045447  100087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:55:57.045467  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:55:57.047909  100087 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:55:57.048349  100087 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:55:57.048383  100087 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:55:57.048528  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:55:57.048711  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:55:57.048874  100087 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:55:57.049040  100087 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:15.437013  100087 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:15.437122  100087 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:15.437143  100087 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:15.437155  100087 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:15.437219  100087 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:15.437227  100087 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:15.437631  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.437707  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.453506  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0417 18:56:15.454025  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.454580  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.454610  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.454948  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.455157  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:15.456795  100087 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:15.456814  100087 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:15.457113  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.457185  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.472223  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
	I0417 18:56:15.472706  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.473304  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.473328  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.473753  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.473981  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:15.476591  100087 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:15.477034  100087 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:15.477062  100087 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:15.477166  100087 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:15.477521  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.477572  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.492617  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I0417 18:56:15.493124  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.493665  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.493688  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.494023  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.494244  100087 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:15.494473  100087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:15.494497  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:15.497473  100087 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:15.497892  100087 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:15.497919  100087 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:15.498111  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:15.498301  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:15.498499  100087 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:15.498689  100087 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:15.591845  100087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:15.612347  100087 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:15.612378  100087 api_server.go:166] Checking apiserver status ...
	I0417 18:56:15.612410  100087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:15.636598  100087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:15.648865  100087 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:15.648936  100087 ssh_runner.go:195] Run: ls
	I0417 18:56:15.654481  100087 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:15.661961  100087 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:15.661994  100087 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:15.662007  100087 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:15.662027  100087 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:15.662382  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.662433  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.677706  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0417 18:56:15.678241  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.678790  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.678820  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.679147  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.679345  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:15.681128  100087 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:15.681144  100087 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:15.681496  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.681535  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.696846  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0417 18:56:15.697318  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.697822  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.697848  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.698249  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.698440  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:15.701355  100087 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:15.701794  100087 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:15.701829  100087 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:15.701980  100087 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:15.702276  100087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:15.702327  100087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:15.718344  100087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0417 18:56:15.718813  100087 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:15.719368  100087 main.go:141] libmachine: Using API Version  1
	I0417 18:56:15.719388  100087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:15.719700  100087 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:15.719883  100087 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:15.720088  100087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:15.720111  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:15.722767  100087 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:15.723192  100087 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:15.723229  100087 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:15.723388  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:15.723545  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:15.723658  100087 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:15.723816  100087 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:15.811271  100087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:15.831819  100087 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-467706 -n ha-467706
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-467706 logs -n 25: (1.599575034s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m03_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m04 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp testdata/cp-test.txt                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m04_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03:/home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m03 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-467706 node stop m02 -v=7                                                     | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 18:49:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 18:49:22.621343   96006 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:49:22.621632   96006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:49:22.621642   96006 out.go:304] Setting ErrFile to fd 2...
	I0417 18:49:22.621647   96006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:49:22.621840   96006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:49:22.622485   96006 out.go:298] Setting JSON to false
	I0417 18:49:22.623337   96006 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9111,"bootTime":1713370652,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:49:22.623412   96006 start.go:139] virtualization: kvm guest
	I0417 18:49:22.625735   96006 out.go:177] * [ha-467706] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:49:22.627418   96006 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:49:22.629062   96006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:49:22.627413   96006 notify.go:220] Checking for updates...
	I0417 18:49:22.630766   96006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:49:22.632309   96006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:22.633783   96006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:49:22.635377   96006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:49:22.636911   96006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:49:22.672838   96006 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 18:49:22.674139   96006 start.go:297] selected driver: kvm2
	I0417 18:49:22.674151   96006 start.go:901] validating driver "kvm2" against <nil>
	I0417 18:49:22.674166   96006 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:49:22.674857   96006 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:49:22.674927   96006 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 18:49:22.690558   96006 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 18:49:22.690619   96006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 18:49:22.690882   96006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:49:22.691013   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:49:22.691030   96006 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0417 18:49:22.691039   96006 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 18:49:22.691121   96006 start.go:340] cluster config:
	{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:49:22.691296   96006 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:49:22.693487   96006 out.go:177] * Starting "ha-467706" primary control-plane node in "ha-467706" cluster
	I0417 18:49:22.694987   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:49:22.695039   96006 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 18:49:22.695047   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:49:22.695149   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:49:22.695174   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:49:22.695683   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:49:22.695728   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json: {Name:mk22dabb72a30759b87fd992aca98de3628495f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:22.695916   96006 start.go:360] acquireMachinesLock for ha-467706: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:49:22.695957   96006 start.go:364] duration metric: took 24.804µs to acquireMachinesLock for "ha-467706"
	I0417 18:49:22.695978   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:49:22.696064   96006 start.go:125] createHost starting for "" (driver="kvm2")
	I0417 18:49:22.697982   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:49:22.698141   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:49:22.698192   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:49:22.713521   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42335
	I0417 18:49:22.714905   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:49:22.715477   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:49:22.715503   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:49:22.715863   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:49:22.716093   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:22.716258   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:22.716442   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:49:22.716465   96006 client.go:168] LocalClient.Create starting
	I0417 18:49:22.716497   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:49:22.716531   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:49:22.716546   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:49:22.716601   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:49:22.716619   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:49:22.716649   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:49:22.716665   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:49:22.716674   96006 main.go:141] libmachine: (ha-467706) Calling .PreCreateCheck
	I0417 18:49:22.717100   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:22.717515   96006 main.go:141] libmachine: Creating machine...
	I0417 18:49:22.717530   96006 main.go:141] libmachine: (ha-467706) Calling .Create
	I0417 18:49:22.717669   96006 main.go:141] libmachine: (ha-467706) Creating KVM machine...
	I0417 18:49:22.719151   96006 main.go:141] libmachine: (ha-467706) DBG | found existing default KVM network
	I0417 18:49:22.719847   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:22.719679   96029 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0417 18:49:22.719880   96006 main.go:141] libmachine: (ha-467706) DBG | created network xml: 
	I0417 18:49:22.719893   96006 main.go:141] libmachine: (ha-467706) DBG | <network>
	I0417 18:49:22.719903   96006 main.go:141] libmachine: (ha-467706) DBG |   <name>mk-ha-467706</name>
	I0417 18:49:22.719910   96006 main.go:141] libmachine: (ha-467706) DBG |   <dns enable='no'/>
	I0417 18:49:22.719918   96006 main.go:141] libmachine: (ha-467706) DBG |   
	I0417 18:49:22.719930   96006 main.go:141] libmachine: (ha-467706) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0417 18:49:22.719948   96006 main.go:141] libmachine: (ha-467706) DBG |     <dhcp>
	I0417 18:49:22.719957   96006 main.go:141] libmachine: (ha-467706) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0417 18:49:22.719970   96006 main.go:141] libmachine: (ha-467706) DBG |     </dhcp>
	I0417 18:49:22.719982   96006 main.go:141] libmachine: (ha-467706) DBG |   </ip>
	I0417 18:49:22.719991   96006 main.go:141] libmachine: (ha-467706) DBG |   
	I0417 18:49:22.719999   96006 main.go:141] libmachine: (ha-467706) DBG | </network>
	I0417 18:49:22.720008   96006 main.go:141] libmachine: (ha-467706) DBG | 
	I0417 18:49:22.725332   96006 main.go:141] libmachine: (ha-467706) DBG | trying to create private KVM network mk-ha-467706 192.168.39.0/24...
	I0417 18:49:22.793280   96006 main.go:141] libmachine: (ha-467706) DBG | private KVM network mk-ha-467706 192.168.39.0/24 created
	I0417 18:49:22.793320   96006 main.go:141] libmachine: (ha-467706) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 ...
	I0417 18:49:22.793340   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:22.793255   96029 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:22.793359   96006 main.go:141] libmachine: (ha-467706) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:49:22.793504   96006 main.go:141] libmachine: (ha-467706) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:49:23.035075   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.034912   96029 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa...
	I0417 18:49:23.188428   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.188292   96029 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/ha-467706.rawdisk...
	I0417 18:49:23.188457   96006 main.go:141] libmachine: (ha-467706) DBG | Writing magic tar header
	I0417 18:49:23.188493   96006 main.go:141] libmachine: (ha-467706) DBG | Writing SSH key tar header
	I0417 18:49:23.188535   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.188433   96029 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 ...
	I0417 18:49:23.188563   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706
	I0417 18:49:23.188608   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 (perms=drwx------)
	I0417 18:49:23.188626   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:49:23.188635   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:49:23.188645   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:49:23.188657   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:49:23.188664   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:49:23.188698   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:49:23.188716   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:23.188722   96006 main.go:141] libmachine: (ha-467706) Creating domain...
	I0417 18:49:23.188733   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:49:23.188744   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:49:23.188761   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:49:23.188787   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home
	I0417 18:49:23.188798   96006 main.go:141] libmachine: (ha-467706) DBG | Skipping /home - not owner
	I0417 18:49:23.190017   96006 main.go:141] libmachine: (ha-467706) define libvirt domain using xml: 
	I0417 18:49:23.190033   96006 main.go:141] libmachine: (ha-467706) <domain type='kvm'>
	I0417 18:49:23.190039   96006 main.go:141] libmachine: (ha-467706)   <name>ha-467706</name>
	I0417 18:49:23.190044   96006 main.go:141] libmachine: (ha-467706)   <memory unit='MiB'>2200</memory>
	I0417 18:49:23.190049   96006 main.go:141] libmachine: (ha-467706)   <vcpu>2</vcpu>
	I0417 18:49:23.190054   96006 main.go:141] libmachine: (ha-467706)   <features>
	I0417 18:49:23.190059   96006 main.go:141] libmachine: (ha-467706)     <acpi/>
	I0417 18:49:23.190067   96006 main.go:141] libmachine: (ha-467706)     <apic/>
	I0417 18:49:23.190072   96006 main.go:141] libmachine: (ha-467706)     <pae/>
	I0417 18:49:23.190086   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190091   96006 main.go:141] libmachine: (ha-467706)   </features>
	I0417 18:49:23.190098   96006 main.go:141] libmachine: (ha-467706)   <cpu mode='host-passthrough'>
	I0417 18:49:23.190103   96006 main.go:141] libmachine: (ha-467706)   
	I0417 18:49:23.190110   96006 main.go:141] libmachine: (ha-467706)   </cpu>
	I0417 18:49:23.190115   96006 main.go:141] libmachine: (ha-467706)   <os>
	I0417 18:49:23.190136   96006 main.go:141] libmachine: (ha-467706)     <type>hvm</type>
	I0417 18:49:23.190145   96006 main.go:141] libmachine: (ha-467706)     <boot dev='cdrom'/>
	I0417 18:49:23.190149   96006 main.go:141] libmachine: (ha-467706)     <boot dev='hd'/>
	I0417 18:49:23.190154   96006 main.go:141] libmachine: (ha-467706)     <bootmenu enable='no'/>
	I0417 18:49:23.190161   96006 main.go:141] libmachine: (ha-467706)   </os>
	I0417 18:49:23.190166   96006 main.go:141] libmachine: (ha-467706)   <devices>
	I0417 18:49:23.190182   96006 main.go:141] libmachine: (ha-467706)     <disk type='file' device='cdrom'>
	I0417 18:49:23.190196   96006 main.go:141] libmachine: (ha-467706)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/boot2docker.iso'/>
	I0417 18:49:23.190209   96006 main.go:141] libmachine: (ha-467706)       <target dev='hdc' bus='scsi'/>
	I0417 18:49:23.190242   96006 main.go:141] libmachine: (ha-467706)       <readonly/>
	I0417 18:49:23.190268   96006 main.go:141] libmachine: (ha-467706)     </disk>
	I0417 18:49:23.190284   96006 main.go:141] libmachine: (ha-467706)     <disk type='file' device='disk'>
	I0417 18:49:23.190297   96006 main.go:141] libmachine: (ha-467706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:49:23.190312   96006 main.go:141] libmachine: (ha-467706)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/ha-467706.rawdisk'/>
	I0417 18:49:23.190324   96006 main.go:141] libmachine: (ha-467706)       <target dev='hda' bus='virtio'/>
	I0417 18:49:23.190351   96006 main.go:141] libmachine: (ha-467706)     </disk>
	I0417 18:49:23.190371   96006 main.go:141] libmachine: (ha-467706)     <interface type='network'>
	I0417 18:49:23.190381   96006 main.go:141] libmachine: (ha-467706)       <source network='mk-ha-467706'/>
	I0417 18:49:23.190392   96006 main.go:141] libmachine: (ha-467706)       <model type='virtio'/>
	I0417 18:49:23.190400   96006 main.go:141] libmachine: (ha-467706)     </interface>
	I0417 18:49:23.190409   96006 main.go:141] libmachine: (ha-467706)     <interface type='network'>
	I0417 18:49:23.190418   96006 main.go:141] libmachine: (ha-467706)       <source network='default'/>
	I0417 18:49:23.190423   96006 main.go:141] libmachine: (ha-467706)       <model type='virtio'/>
	I0417 18:49:23.190430   96006 main.go:141] libmachine: (ha-467706)     </interface>
	I0417 18:49:23.190435   96006 main.go:141] libmachine: (ha-467706)     <serial type='pty'>
	I0417 18:49:23.190444   96006 main.go:141] libmachine: (ha-467706)       <target port='0'/>
	I0417 18:49:23.190450   96006 main.go:141] libmachine: (ha-467706)     </serial>
	I0417 18:49:23.190455   96006 main.go:141] libmachine: (ha-467706)     <console type='pty'>
	I0417 18:49:23.190463   96006 main.go:141] libmachine: (ha-467706)       <target type='serial' port='0'/>
	I0417 18:49:23.190468   96006 main.go:141] libmachine: (ha-467706)     </console>
	I0417 18:49:23.190475   96006 main.go:141] libmachine: (ha-467706)     <rng model='virtio'>
	I0417 18:49:23.190481   96006 main.go:141] libmachine: (ha-467706)       <backend model='random'>/dev/random</backend>
	I0417 18:49:23.190488   96006 main.go:141] libmachine: (ha-467706)     </rng>
	I0417 18:49:23.190493   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190503   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190512   96006 main.go:141] libmachine: (ha-467706)   </devices>
	I0417 18:49:23.190527   96006 main.go:141] libmachine: (ha-467706) </domain>
	I0417 18:49:23.190535   96006 main.go:141] libmachine: (ha-467706) 
	I0417 18:49:23.194869   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:02:89:fa in network default
	I0417 18:49:23.195525   96006 main.go:141] libmachine: (ha-467706) Ensuring networks are active...
	I0417 18:49:23.195544   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:23.196256   96006 main.go:141] libmachine: (ha-467706) Ensuring network default is active
	I0417 18:49:23.196508   96006 main.go:141] libmachine: (ha-467706) Ensuring network mk-ha-467706 is active
	I0417 18:49:23.196986   96006 main.go:141] libmachine: (ha-467706) Getting domain xml...
	I0417 18:49:23.197668   96006 main.go:141] libmachine: (ha-467706) Creating domain...
	I0417 18:49:24.377122   96006 main.go:141] libmachine: (ha-467706) Waiting to get IP...
	I0417 18:49:24.378185   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.378610   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.378673   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.378570   96029 retry.go:31] will retry after 187.650817ms: waiting for machine to come up
	I0417 18:49:24.568112   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.568585   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.568610   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.568519   96029 retry.go:31] will retry after 263.565831ms: waiting for machine to come up
	I0417 18:49:24.834051   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.834456   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.834493   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.834423   96029 retry.go:31] will retry after 431.588458ms: waiting for machine to come up
	I0417 18:49:25.268032   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:25.268496   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:25.268516   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:25.268462   96029 retry.go:31] will retry after 586.894254ms: waiting for machine to come up
	I0417 18:49:25.857433   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:25.857951   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:25.857983   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:25.857879   96029 retry.go:31] will retry after 478.597863ms: waiting for machine to come up
	I0417 18:49:26.337567   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:26.337946   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:26.337971   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:26.337906   96029 retry.go:31] will retry after 722.019817ms: waiting for machine to come up
	I0417 18:49:27.061866   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:27.062146   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:27.062192   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:27.062092   96029 retry.go:31] will retry after 901.648194ms: waiting for machine to come up
	I0417 18:49:27.965748   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:27.966079   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:27.966102   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:27.966048   96029 retry.go:31] will retry after 954.18526ms: waiting for machine to come up
	I0417 18:49:28.921955   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:28.922298   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:28.922330   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:28.922239   96029 retry.go:31] will retry after 1.478334758s: waiting for machine to come up
	I0417 18:49:30.401822   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:30.402348   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:30.402377   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:30.402273   96029 retry.go:31] will retry after 2.24649483s: waiting for machine to come up
	I0417 18:49:32.651659   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:32.652032   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:32.652060   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:32.651979   96029 retry.go:31] will retry after 2.647468116s: waiting for machine to come up
	I0417 18:49:35.302402   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:35.302798   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:35.302829   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:35.302749   96029 retry.go:31] will retry after 2.438483753s: waiting for machine to come up
	I0417 18:49:37.743278   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:37.743704   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:37.743739   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:37.743648   96029 retry.go:31] will retry after 3.206013787s: waiting for machine to come up
	I0417 18:49:40.953078   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:40.953481   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:40.953510   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:40.953465   96029 retry.go:31] will retry after 4.754103915s: waiting for machine to come up
	I0417 18:49:45.711373   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.711801   96006 main.go:141] libmachine: (ha-467706) Found IP for machine: 192.168.39.159
	I0417 18:49:45.711825   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has current primary IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.711832   96006 main.go:141] libmachine: (ha-467706) Reserving static IP address...
	I0417 18:49:45.712231   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find host DHCP lease matching {name: "ha-467706", mac: "52:54:00:3b:c1:55", ip: "192.168.39.159"} in network mk-ha-467706
	I0417 18:49:45.790706   96006 main.go:141] libmachine: (ha-467706) Reserved static IP address: 192.168.39.159
	I0417 18:49:45.790759   96006 main.go:141] libmachine: (ha-467706) Waiting for SSH to be available...
	I0417 18:49:45.790772   96006 main.go:141] libmachine: (ha-467706) DBG | Getting to WaitForSSH function...
	I0417 18:49:45.793775   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.794200   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:45.794236   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.794395   96006 main.go:141] libmachine: (ha-467706) DBG | Using SSH client type: external
	I0417 18:49:45.794422   96006 main.go:141] libmachine: (ha-467706) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa (-rw-------)
	I0417 18:49:45.794455   96006 main.go:141] libmachine: (ha-467706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:49:45.794471   96006 main.go:141] libmachine: (ha-467706) DBG | About to run SSH command:
	I0417 18:49:45.794484   96006 main.go:141] libmachine: (ha-467706) DBG | exit 0
	I0417 18:49:45.917151   96006 main.go:141] libmachine: (ha-467706) DBG | SSH cmd err, output: <nil>: 
	I0417 18:49:45.917423   96006 main.go:141] libmachine: (ha-467706) KVM machine creation complete!
	I0417 18:49:45.917783   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:45.918347   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:45.918561   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:45.918782   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:49:45.918799   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:49:45.920179   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:49:45.920196   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:49:45.920204   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:49:45.920218   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:45.922787   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.923202   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:45.923230   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.923388   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:45.923626   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:45.923791   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:45.923975   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:45.924120   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:45.924416   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:45.924434   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:49:46.020232   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:49:46.020256   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:49:46.020267   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.023222   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.023614   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.023642   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.023856   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.024087   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.024295   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.024474   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.024674   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.024895   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.024909   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:49:46.122008   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:49:46.122077   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:49:46.122084   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:49:46.122093   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.122346   96006 buildroot.go:166] provisioning hostname "ha-467706"
	I0417 18:49:46.122366   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.122583   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.125229   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.125668   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.125696   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.125858   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.126051   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.126223   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.126403   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.126575   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.126745   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.126758   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706 && echo "ha-467706" | sudo tee /etc/hostname
	I0417 18:49:46.241359   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:49:46.241388   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.244126   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.244567   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.244610   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.244867   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.245076   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.245277   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.245428   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.245620   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.245841   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.245860   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:49:46.356754   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:49:46.356807   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:49:46.356900   96006 buildroot.go:174] setting up certificates
	I0417 18:49:46.356915   96006 provision.go:84] configureAuth start
	I0417 18:49:46.356931   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.357220   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:46.359879   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.360284   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.360311   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.360453   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.362942   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.363300   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.363331   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.363469   96006 provision.go:143] copyHostCerts
	I0417 18:49:46.363499   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:49:46.363558   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:49:46.363584   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:49:46.363675   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:49:46.363791   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:49:46.363813   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:49:46.363818   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:49:46.363848   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:49:46.363901   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:49:46.363917   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:49:46.363923   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:49:46.363943   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:49:46.364003   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706 san=[127.0.0.1 192.168.39.159 ha-467706 localhost minikube]
	I0417 18:49:46.547992   96006 provision.go:177] copyRemoteCerts
	I0417 18:49:46.548058   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:49:46.548085   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.550923   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.551238   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.551272   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.551446   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.551662   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.551812   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.551945   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:46.631629   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:49:46.631706   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:49:46.660510   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:49:46.660601   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0417 18:49:46.686435   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:49:46.686519   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 18:49:46.712855   96006 provision.go:87] duration metric: took 355.924441ms to configureAuth
	I0417 18:49:46.712892   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:49:46.713118   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:49:46.713214   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.715807   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.716194   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.716214   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.716475   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.716658   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.716844   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.717015   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.717222   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.717455   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.717479   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:49:46.986253   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 18:49:46.986294   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:49:46.986306   96006 main.go:141] libmachine: (ha-467706) Calling .GetURL
	I0417 18:49:46.987867   96006 main.go:141] libmachine: (ha-467706) DBG | Using libvirt version 6000000
	I0417 18:49:46.990607   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.991025   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.991057   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.991208   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:49:46.991225   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:49:46.991235   96006 client.go:171] duration metric: took 24.274758262s to LocalClient.Create
	I0417 18:49:46.991267   96006 start.go:167] duration metric: took 24.274825568s to libmachine.API.Create "ha-467706"
	I0417 18:49:46.991278   96006 start.go:293] postStartSetup for "ha-467706" (driver="kvm2")
	I0417 18:49:46.991298   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:49:46.991317   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:46.991606   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:49:46.991639   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.994027   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.994408   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.994434   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.994582   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.994815   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.994988   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.995160   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.075556   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:49:47.080124   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:49:47.080157   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:49:47.080240   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:49:47.080378   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:49:47.080395   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:49:47.080509   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:49:47.090584   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:49:47.116825   96006 start.go:296] duration metric: took 125.527222ms for postStartSetup
	I0417 18:49:47.116884   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:47.117502   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:47.120183   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.120522   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.120553   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.120862   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:49:47.121043   96006 start.go:128] duration metric: took 24.424966199s to createHost
	I0417 18:49:47.121067   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.123333   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.123641   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.123669   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.123762   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.123939   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.124163   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.124296   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.124473   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:47.124691   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:47.124711   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:49:47.226058   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379787.196532675
	
	I0417 18:49:47.226085   96006 fix.go:216] guest clock: 1713379787.196532675
	I0417 18:49:47.226096   96006 fix.go:229] Guest: 2024-04-17 18:49:47.196532675 +0000 UTC Remote: 2024-04-17 18:49:47.12105477 +0000 UTC m=+24.548401797 (delta=75.477905ms)
	I0417 18:49:47.226122   96006 fix.go:200] guest clock delta is within tolerance: 75.477905ms
	I0417 18:49:47.226132   96006 start.go:83] releasing machines lock for "ha-467706", held for 24.530164s
	I0417 18:49:47.226159   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.226466   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:47.229254   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.229610   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.229641   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.229826   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230365   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230563   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230659   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:49:47.230708   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.230824   96006 ssh_runner.go:195] Run: cat /version.json
	I0417 18:49:47.230857   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.233295   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.233738   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.233769   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.233789   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.234080   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.234254   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.234319   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.234371   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.234459   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.234461   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.234625   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.234712   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.234758   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.234859   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.332201   96006 ssh_runner.go:195] Run: systemctl --version
	I0417 18:49:47.338467   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:49:47.503987   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:49:47.510271   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:49:47.510357   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:49:47.526939   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:49:47.526969   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:49:47.527048   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:49:47.544808   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:49:47.560276   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:49:47.560342   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:49:47.575493   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:49:47.590305   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:49:47.703106   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:49:47.847924   96006 docker.go:233] disabling docker service ...
	I0417 18:49:47.848005   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:49:47.863461   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:49:47.877676   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:49:48.022562   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:49:48.151007   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:49:48.166077   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:49:48.186073   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:49:48.186142   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.197296   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:49:48.197367   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.209670   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.221372   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.233508   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:49:48.245432   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.257212   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.276089   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.288509   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:49:48.299360   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:49:48.299433   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:49:48.313454   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 18:49:48.324328   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:49:48.450531   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 18:49:48.594212   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:49:48.594298   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:49:48.599281   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:49:48.599344   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:49:48.603340   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:49:48.642689   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:49:48.642799   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:49:48.671577   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:49:48.702661   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:49:48.704026   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:48.706752   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:48.707106   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:48.707135   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:48.707492   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:49:48.712013   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:49:48.726914   96006 kubeadm.go:877] updating cluster {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 18:49:48.727027   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:49:48.727072   96006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 18:49:48.763371   96006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0417 18:49:48.763447   96006 ssh_runner.go:195] Run: which lz4
	I0417 18:49:48.767730   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0417 18:49:48.767859   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0417 18:49:48.772298   96006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0417 18:49:48.772335   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394547972 bytes)
	I0417 18:49:50.300197   96006 crio.go:462] duration metric: took 1.532382815s to copy over tarball
	I0417 18:49:50.300268   96006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0417 18:49:52.542643   96006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.242348458s)
	I0417 18:49:52.542670   96006 crio.go:469] duration metric: took 2.242444327s to extract the tarball
	I0417 18:49:52.542679   96006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 18:49:52.580669   96006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 18:49:52.627109   96006 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 18:49:52.627137   96006 cache_images.go:84] Images are preloaded, skipping loading
	I0417 18:49:52.627146   96006 kubeadm.go:928] updating node { 192.168.39.159 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:49:52.627259   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 18:49:52.627324   96006 ssh_runner.go:195] Run: crio config
	I0417 18:49:52.672604   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:49:52.672627   96006 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0417 18:49:52.672640   96006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 18:49:52.672667   96006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-467706 NodeName:ha-467706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 18:49:52.672846   96006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-467706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 18:49:52.672877   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:49:52.672919   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:49:52.690262   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:49:52.690373   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0417 18:49:52.690446   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:49:52.701509   96006 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 18:49:52.701583   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0417 18:49:52.711831   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0417 18:49:52.729846   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:49:52.747405   96006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0417 18:49:52.764754   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0417 18:49:52.782535   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:49:52.786592   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:49:52.799666   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:49:52.916690   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:49:52.934217   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.159
	I0417 18:49:52.934251   96006 certs.go:194] generating shared ca certs ...
	I0417 18:49:52.934274   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:52.934431   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:49:52.934472   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:49:52.934483   96006 certs.go:256] generating profile certs ...
	I0417 18:49:52.934530   96006 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:49:52.934544   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt with IP's: []
	I0417 18:49:53.244202   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt ...
	I0417 18:49:53.244236   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt: {Name:mk260beaef924a663e20e604d910222418991c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.244413   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key ...
	I0417 18:49:53.244425   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key: {Name:mka6e83ab53b9a78e8580ba26a408c6fe0aa4108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.244500   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce
	I0417 18:49:53.244516   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.254]
	I0417 18:49:53.351159   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce ...
	I0417 18:49:53.351195   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce: {Name:mk0d007a3059514906a38e3c48ad705a629ef9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.351349   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce ...
	I0417 18:49:53.351371   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce: {Name:mkb0a296faba2ad3992fca03f9ce3ee187f67de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.351441   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:49:53.351513   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:49:53.351566   96006 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 18:49:53.351580   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt with IP's: []
	I0417 18:49:53.481387   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt ...
	I0417 18:49:53.481422   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt: {Name:mk08465b193680de7d272e691e702536866d5179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.481583   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key ...
	I0417 18:49:53.481594   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key: {Name:mkb793bc73b35b3c9b394526c75bb288dee06af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.481657   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:49:53.481675   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:49:53.481685   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:49:53.481696   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:49:53.481714   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:49:53.481727   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:49:53.481739   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:49:53.481760   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:49:53.481808   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:49:53.481843   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:49:53.481853   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:49:53.481877   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:49:53.481899   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:49:53.481925   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:49:53.481960   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:49:53.481987   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.482001   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.482013   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.482612   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:49:53.509306   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:49:53.537445   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:49:53.565313   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:49:53.592648   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0417 18:49:53.619996   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 18:49:53.649884   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:49:53.679489   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:49:53.718882   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:49:53.756459   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:49:53.791680   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:49:53.818386   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 18:49:53.836408   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:49:53.842201   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:49:53.854077   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.858810   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.858878   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.864686   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 18:49:53.876271   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:49:53.888608   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.893582   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.893662   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.899643   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:49:53.912156   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:49:53.924518   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.929666   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.929743   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.935925   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
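The sequence above stages each CA under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal local Go sketch of that pattern, shelling out to the same openssl x509 -hash call (a hypothetical helper for illustration, not minikube's own certs.go code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the pattern in the log: compute the OpenSSL
	// subject hash of a CA file and symlink it as <hash>.0 in /etc/ssl/certs.
	func linkBySubjectHash(caPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of `ln -fs`: drop any stale link, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(caPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}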
	I0417 18:49:53.948824   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:49:53.953491   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:49:53.953551   96006 kubeadm.go:391] StartCluster: {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2
ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:49:53.953647   96006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 18:49:53.953694   96006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 18:49:53.994421   96006 cri.go:89] found id: ""
	I0417 18:49:53.994501   96006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 18:49:54.006024   96006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 18:49:54.017184   96006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 18:49:54.027791   96006 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 18:49:54.027814   96006 kubeadm.go:156] found existing configuration files:
	
	I0417 18:49:54.027869   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 18:49:54.037957   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 18:49:54.038021   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 18:49:54.048431   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 18:49:54.058368   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 18:49:54.058435   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 18:49:54.069360   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 18:49:54.080326   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 18:49:54.080383   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 18:49:54.091346   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 18:49:54.101536   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 18:49:54.101611   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
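The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A local-filesystem approximation of that loop (a sketch only; the real checks run over SSH on the VM):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			data, err := os.ReadFile(c)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
				_ = os.Remove(c)
				fmt.Println("removed stale config:", c)
			}
		}
	}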
	I0417 18:49:54.112092   96006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 18:49:54.221173   96006 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 18:49:54.221276   96006 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 18:49:54.345642   96006 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 18:49:54.345788   96006 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 18:49:54.345991   96006 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0417 18:49:54.586516   96006 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 18:49:54.750772   96006 out.go:204]   - Generating certificates and keys ...
	I0417 18:49:54.750922   96006 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 18:49:54.751062   96006 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 18:49:54.751194   96006 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 18:49:54.903367   96006 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 18:49:54.954630   96006 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 18:49:55.127672   96006 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 18:49:55.391718   96006 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 18:49:55.391885   96006 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-467706 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0417 18:49:55.502336   96006 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 18:49:55.502504   96006 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-467706 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0417 18:49:55.698045   96006 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 18:49:55.867661   96006 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 18:49:56.226068   96006 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 18:49:56.226276   96006 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 18:49:56.336532   96006 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 18:49:56.482956   96006 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 18:49:56.554416   96006 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 18:49:56.815239   96006 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 18:49:56.902120   96006 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 18:49:56.902695   96006 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 18:49:56.906066   96006 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 18:49:56.908270   96006 out.go:204]   - Booting up control plane ...
	I0417 18:49:56.908379   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 18:49:56.908500   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 18:49:56.908588   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 18:49:56.925089   96006 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 18:49:56.925875   96006 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 18:49:56.925950   96006 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 18:49:57.077441   96006 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 18:49:57.077590   96006 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 18:49:57.579007   96006 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.961199ms
	I0417 18:49:57.579109   96006 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 18:50:03.508397   96006 kubeadm.go:309] [api-check] The API server is healthy after 5.933545161s
	I0417 18:50:03.524830   96006 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 18:50:03.541796   96006 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 18:50:03.571323   96006 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 18:50:03.571530   96006 kubeadm.go:309] [mark-control-plane] Marking the node ha-467706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 18:50:03.583854   96006 kubeadm.go:309] [bootstrap-token] Using token: hjpiw8.6t8szkhj41h84dis
	I0417 18:50:03.585429   96006 out.go:204]   - Configuring RBAC rules ...
	I0417 18:50:03.585594   96006 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 18:50:03.600099   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 18:50:03.615453   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 18:50:03.619726   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 18:50:03.624618   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 18:50:03.628356   96006 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 18:50:03.917165   96006 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 18:50:04.368237   96006 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 18:50:04.916872   96006 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 18:50:04.918021   96006 kubeadm.go:309] 
	I0417 18:50:04.918113   96006 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 18:50:04.918127   96006 kubeadm.go:309] 
	I0417 18:50:04.918209   96006 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 18:50:04.918220   96006 kubeadm.go:309] 
	I0417 18:50:04.918272   96006 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 18:50:04.918340   96006 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 18:50:04.918418   96006 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 18:50:04.918429   96006 kubeadm.go:309] 
	I0417 18:50:04.918492   96006 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 18:50:04.918511   96006 kubeadm.go:309] 
	I0417 18:50:04.918573   96006 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 18:50:04.918588   96006 kubeadm.go:309] 
	I0417 18:50:04.918691   96006 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 18:50:04.918802   96006 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 18:50:04.918901   96006 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 18:50:04.918912   96006 kubeadm.go:309] 
	I0417 18:50:04.919032   96006 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 18:50:04.919142   96006 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 18:50:04.919153   96006 kubeadm.go:309] 
	I0417 18:50:04.919262   96006 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hjpiw8.6t8szkhj41h84dis \
	I0417 18:50:04.919431   96006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 \
	I0417 18:50:04.919479   96006 kubeadm.go:309] 	--control-plane 
	I0417 18:50:04.919489   96006 kubeadm.go:309] 
	I0417 18:50:04.919599   96006 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 18:50:04.919614   96006 kubeadm.go:309] 
	I0417 18:50:04.919715   96006 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hjpiw8.6t8szkhj41h84dis \
	I0417 18:50:04.919854   96006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 
	I0417 18:50:04.920711   96006 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
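The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm derives as a SHA-256 over the cluster CA certificate's Subject Public Key Info. A small sketch of that calculation from a local ca.crt (the path is an assumption for illustration):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Read the cluster CA certificate (path assumed for this sketch).
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}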
	I0417 18:50:04.920755   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:50:04.920765   96006 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0417 18:50:04.922745   96006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0417 18:50:04.924119   96006 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0417 18:50:04.932915   96006 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl ...
	I0417 18:50:04.932940   96006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0417 18:50:04.958687   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
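With multinode requested, minikube picks kindnet, stages the manifest at /var/tmp/minikube/cni.yaml, and applies it with the version-matched kubectl and the node-local kubeconfig. A thin Go wrapper around that exact invocation (paths taken from the log; a sketch, not the ssh_runner code path):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Apply the CNI manifest with the cluster-matched kubectl binary.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}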
	I0417 18:50:05.317320   96006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 18:50:05.317440   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:05.317469   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706 minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=true
	I0417 18:50:05.374278   96006 ops.go:34] apiserver oom_adj: -16
	I0417 18:50:05.507845   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:06.008879   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:06.508139   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:07.008349   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:07.507966   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:08.008041   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:08.508561   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:09.008903   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:09.508146   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:10.007855   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:10.508919   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:11.008695   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:11.508789   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:12.008689   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:12.508392   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:13.008145   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:13.508071   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:14.008218   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:14.508449   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:15.008880   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:15.508603   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:16.008586   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:16.508154   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:17.008458   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:17.152345   96006 kubeadm.go:1107] duration metric: took 11.83496435s to wait for elevateKubeSystemPrivileges
	W0417 18:50:17.152395   96006 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0417 18:50:17.152405   96006 kubeadm.go:393] duration metric: took 23.198862653s to StartCluster
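The repeated `kubectl get sa default` calls above are a plain poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which here took about 11.8s. A hedged sketch of that wait pattern (helper name and timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
	// timeout expires, mirroring the retry loop in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println("wait result:", err)
	}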
	I0417 18:50:17.152426   96006 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:17.152501   96006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:50:17.153264   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:17.153473   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0417 18:50:17.153481   96006 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:17.153503   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:50:17.153521   96006 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0417 18:50:17.153623   96006 addons.go:69] Setting storage-provisioner=true in profile "ha-467706"
	I0417 18:50:17.153655   96006 addons.go:234] Setting addon storage-provisioner=true in "ha-467706"
	I0417 18:50:17.153681   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:17.153624   96006 addons.go:69] Setting default-storageclass=true in profile "ha-467706"
	I0417 18:50:17.153746   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:17.153771   96006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-467706"
	I0417 18:50:17.154085   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.154124   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.154162   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.154197   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.169960   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0417 18:50:17.170015   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41635
	I0417 18:50:17.170413   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.170442   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.170930   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.170951   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.171058   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.171079   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.171343   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.171539   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.171702   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.171975   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.172010   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.174031   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:50:17.174414   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
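The rest.Config dump above is the client minikube builds from the freshly written kubeconfig. A minimal client-go sketch that loads the same file and lists the same resource the log queries a moment later (assuming k8s.io/client-go and k8s.io/apimachinery are importable; the kubeconfig path is the one from the log):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a *rest.Config from the kubeconfig path shown in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same endpoint the round_trippers lines hit: storage.k8s.io/v1 storageclasses.
		scs, err := client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("storage classes:", len(scs.Items))
	}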
	I0417 18:50:17.175002   96006 cert_rotation.go:137] Starting client certificate rotation controller
	I0417 18:50:17.175289   96006 addons.go:234] Setting addon default-storageclass=true in "ha-467706"
	I0417 18:50:17.175344   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:17.175731   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.175781   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.187657   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0417 18:50:17.188199   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.188829   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.188857   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.189272   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.189618   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.190208   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0417 18:50:17.190573   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.191100   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.191124   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.191523   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.191659   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:17.193826   96006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 18:50:17.192154   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.195104   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.195281   96006 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:50:17.195305   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 18:50:17.195330   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:17.198235   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.198524   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:17.198546   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.198731   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:17.198949   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:17.199120   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:17.199263   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
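The sshutil line above assembles a plain SSH client (node IP, port 22, the per-machine id_rsa, user docker) that the addon installer then uses to copy manifests into the VM. A minimal sketch of that client setup with golang.org/x/crypto/ssh (a hypothetical stand-in, not minikube's sshutil package):

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", "192.168.39.159:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		log.Println("connected:", string(client.ServerVersion()))
	}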
	I0417 18:50:17.210463   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0417 18:50:17.210875   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.211399   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.211424   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.211846   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.212069   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.213887   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:17.214200   96006 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 18:50:17.214223   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 18:50:17.214245   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:17.216977   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.217385   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:17.217430   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.217612   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:17.217792   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:17.217935   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:17.218076   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:17.317822   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0417 18:50:17.367713   96006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:50:17.380094   96006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 18:50:17.756247   96006 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
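The sed pipeline above patches the coredns ConfigMap in place. Reconstructed from the expression, the Corefile gains a hosts stanza ahead of the existing forward block (plus a log directive before errors), so host.minikube.internal resolves to the host-only gateway 192.168.39.1:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}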
	I0417 18:50:18.075807   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.075832   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.075894   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.075922   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076153   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076172   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076183   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.076191   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076270   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076290   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076295   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076315   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.076329   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076399   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076410   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076440   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076507   96006 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0417 18:50:18.076528   96006 round_trippers.go:469] Request Headers:
	I0417 18:50:18.076539   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:50:18.076543   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:50:18.076684   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076729   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076748   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.090116   96006 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0417 18:50:18.091273   96006 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0417 18:50:18.091296   96006 round_trippers.go:469] Request Headers:
	I0417 18:50:18.091315   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:50:18.091320   96006 round_trippers.go:473]     Content-Type: application/json
	I0417 18:50:18.091328   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:50:18.094312   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:50:18.094536   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.094557   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.094830   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.094851   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.094873   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.096565   96006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0417 18:50:18.098207   96006 addons.go:505] duration metric: took 944.687777ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0417 18:50:18.098255   96006 start.go:245] waiting for cluster config update ...
	I0417 18:50:18.098273   96006 start.go:254] writing updated cluster config ...
	I0417 18:50:18.099861   96006 out.go:177] 
	I0417 18:50:18.101568   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:18.101676   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:18.103543   96006 out.go:177] * Starting "ha-467706-m02" control-plane node in "ha-467706" cluster
	I0417 18:50:18.104867   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:50:18.104900   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:50:18.105008   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:50:18.105026   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:50:18.105143   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:18.105367   96006 start.go:360] acquireMachinesLock for ha-467706-m02: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:50:18.105445   96006 start.go:364] duration metric: took 47.307µs to acquireMachinesLock for "ha-467706-m02"
	I0417 18:50:18.105473   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:18.105578   96006 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0417 18:50:18.107330   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:50:18.107431   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:18.107460   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:18.122242   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42417
	I0417 18:50:18.122702   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:18.123187   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:18.123210   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:18.123536   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:18.123731   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:18.123878   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:18.124060   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:50:18.124089   96006 client.go:168] LocalClient.Create starting
	I0417 18:50:18.124126   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:50:18.124174   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:50:18.124193   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:50:18.124264   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:50:18.124291   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:50:18.124571   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:50:18.124658   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:50:18.124674   96006 main.go:141] libmachine: (ha-467706-m02) Calling .PreCreateCheck
	I0417 18:50:18.125072   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:18.126362   96006 main.go:141] libmachine: Creating machine...
	I0417 18:50:18.126386   96006 main.go:141] libmachine: (ha-467706-m02) Calling .Create
	I0417 18:50:18.126923   96006 main.go:141] libmachine: (ha-467706-m02) Creating KVM machine...
	I0417 18:50:18.128081   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found existing default KVM network
	I0417 18:50:18.128226   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found existing private KVM network mk-ha-467706
	I0417 18:50:18.128361   96006 main.go:141] libmachine: (ha-467706-m02) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 ...
	I0417 18:50:18.128389   96006 main.go:141] libmachine: (ha-467706-m02) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:50:18.128482   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.128356   96351 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:50:18.128617   96006 main.go:141] libmachine: (ha-467706-m02) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:50:18.367880   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.367714   96351 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa...
	I0417 18:50:18.531251   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.531111   96351 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/ha-467706-m02.rawdisk...
	I0417 18:50:18.531282   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Writing magic tar header
	I0417 18:50:18.531293   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Writing SSH key tar header
	I0417 18:50:18.531301   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.531231   96351 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 ...
	I0417 18:50:18.531380   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02
	I0417 18:50:18.531401   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:50:18.531413   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 (perms=drwx------)
	I0417 18:50:18.531434   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:50:18.531449   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:50:18.531462   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:50:18.531475   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:50:18.531484   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:50:18.531493   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:50:18.531501   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:50:18.531510   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:50:18.531524   96006 main.go:141] libmachine: (ha-467706-m02) Creating domain...
	I0417 18:50:18.531537   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:50:18.531549   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home
	I0417 18:50:18.531561   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Skipping /home - not owner
	I0417 18:50:18.532389   96006 main.go:141] libmachine: (ha-467706-m02) define libvirt domain using xml: 
	I0417 18:50:18.532412   96006 main.go:141] libmachine: (ha-467706-m02) <domain type='kvm'>
	I0417 18:50:18.532440   96006 main.go:141] libmachine: (ha-467706-m02)   <name>ha-467706-m02</name>
	I0417 18:50:18.532464   96006 main.go:141] libmachine: (ha-467706-m02)   <memory unit='MiB'>2200</memory>
	I0417 18:50:18.532474   96006 main.go:141] libmachine: (ha-467706-m02)   <vcpu>2</vcpu>
	I0417 18:50:18.532484   96006 main.go:141] libmachine: (ha-467706-m02)   <features>
	I0417 18:50:18.532495   96006 main.go:141] libmachine: (ha-467706-m02)     <acpi/>
	I0417 18:50:18.532508   96006 main.go:141] libmachine: (ha-467706-m02)     <apic/>
	I0417 18:50:18.532574   96006 main.go:141] libmachine: (ha-467706-m02)     <pae/>
	I0417 18:50:18.532610   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.532633   96006 main.go:141] libmachine: (ha-467706-m02)   </features>
	I0417 18:50:18.532649   96006 main.go:141] libmachine: (ha-467706-m02)   <cpu mode='host-passthrough'>
	I0417 18:50:18.532666   96006 main.go:141] libmachine: (ha-467706-m02)   
	I0417 18:50:18.532678   96006 main.go:141] libmachine: (ha-467706-m02)   </cpu>
	I0417 18:50:18.532690   96006 main.go:141] libmachine: (ha-467706-m02)   <os>
	I0417 18:50:18.532702   96006 main.go:141] libmachine: (ha-467706-m02)     <type>hvm</type>
	I0417 18:50:18.532716   96006 main.go:141] libmachine: (ha-467706-m02)     <boot dev='cdrom'/>
	I0417 18:50:18.532738   96006 main.go:141] libmachine: (ha-467706-m02)     <boot dev='hd'/>
	I0417 18:50:18.532751   96006 main.go:141] libmachine: (ha-467706-m02)     <bootmenu enable='no'/>
	I0417 18:50:18.532766   96006 main.go:141] libmachine: (ha-467706-m02)   </os>
	I0417 18:50:18.532791   96006 main.go:141] libmachine: (ha-467706-m02)   <devices>
	I0417 18:50:18.532809   96006 main.go:141] libmachine: (ha-467706-m02)     <disk type='file' device='cdrom'>
	I0417 18:50:18.532826   96006 main.go:141] libmachine: (ha-467706-m02)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/boot2docker.iso'/>
	I0417 18:50:18.532838   96006 main.go:141] libmachine: (ha-467706-m02)       <target dev='hdc' bus='scsi'/>
	I0417 18:50:18.532850   96006 main.go:141] libmachine: (ha-467706-m02)       <readonly/>
	I0417 18:50:18.532869   96006 main.go:141] libmachine: (ha-467706-m02)     </disk>
	I0417 18:50:18.532879   96006 main.go:141] libmachine: (ha-467706-m02)     <disk type='file' device='disk'>
	I0417 18:50:18.532897   96006 main.go:141] libmachine: (ha-467706-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:50:18.532914   96006 main.go:141] libmachine: (ha-467706-m02)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/ha-467706-m02.rawdisk'/>
	I0417 18:50:18.532926   96006 main.go:141] libmachine: (ha-467706-m02)       <target dev='hda' bus='virtio'/>
	I0417 18:50:18.532938   96006 main.go:141] libmachine: (ha-467706-m02)     </disk>
	I0417 18:50:18.532949   96006 main.go:141] libmachine: (ha-467706-m02)     <interface type='network'>
	I0417 18:50:18.532961   96006 main.go:141] libmachine: (ha-467706-m02)       <source network='mk-ha-467706'/>
	I0417 18:50:18.532975   96006 main.go:141] libmachine: (ha-467706-m02)       <model type='virtio'/>
	I0417 18:50:18.532994   96006 main.go:141] libmachine: (ha-467706-m02)     </interface>
	I0417 18:50:18.533013   96006 main.go:141] libmachine: (ha-467706-m02)     <interface type='network'>
	I0417 18:50:18.533026   96006 main.go:141] libmachine: (ha-467706-m02)       <source network='default'/>
	I0417 18:50:18.533043   96006 main.go:141] libmachine: (ha-467706-m02)       <model type='virtio'/>
	I0417 18:50:18.533056   96006 main.go:141] libmachine: (ha-467706-m02)     </interface>
	I0417 18:50:18.533067   96006 main.go:141] libmachine: (ha-467706-m02)     <serial type='pty'>
	I0417 18:50:18.533079   96006 main.go:141] libmachine: (ha-467706-m02)       <target port='0'/>
	I0417 18:50:18.533089   96006 main.go:141] libmachine: (ha-467706-m02)     </serial>
	I0417 18:50:18.533105   96006 main.go:141] libmachine: (ha-467706-m02)     <console type='pty'>
	I0417 18:50:18.533116   96006 main.go:141] libmachine: (ha-467706-m02)       <target type='serial' port='0'/>
	I0417 18:50:18.533138   96006 main.go:141] libmachine: (ha-467706-m02)     </console>
	I0417 18:50:18.533161   96006 main.go:141] libmachine: (ha-467706-m02)     <rng model='virtio'>
	I0417 18:50:18.533179   96006 main.go:141] libmachine: (ha-467706-m02)       <backend model='random'>/dev/random</backend>
	I0417 18:50:18.533191   96006 main.go:141] libmachine: (ha-467706-m02)     </rng>
	I0417 18:50:18.533202   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.533212   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.533224   96006 main.go:141] libmachine: (ha-467706-m02)   </devices>
	I0417 18:50:18.533238   96006 main.go:141] libmachine: (ha-467706-m02) </domain>
	I0417 18:50:18.533255   96006 main.go:141] libmachine: (ha-467706-m02) 
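The XML logged line by line above is the complete libvirt domain definition the kvm2 driver submits for the new node: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO as a CD-ROM, a raw disk, and NICs on the mk-ha-467706 and default networks. As a rough, self-contained illustration (not the driver's actual code), the same kind of definition can be rendered from a Go text/template; the struct, template text and paths below are assumptions made for this sketch, and its output could in principle be handed to `virsh define`.

	// domainxml.go - sketch only: render a libvirt <domain> definition similar in
	// shape to the XML logged above. Types, template and paths are illustrative,
	// not the kvm2 driver's real code.
	package main

	import (
	    "os"
	    "text/template"
	)

	type domainConfig struct {
	    Name     string
	    MemoryMB int
	    VCPU     int
	    ISOPath  string
	    DiskPath string
	    Network  string
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPU}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	func main() {
	    cfg := domainConfig{
	        Name:     "ha-467706-m02",
	        MemoryMB: 2200,
	        VCPU:     2,
	        ISOPath:  "/path/to/boot2docker.iso",   // placeholder path
	        DiskPath: "/path/to/ha-467706-m02.rawdisk", // placeholder path
	        Network:  "mk-ha-467706",
	    }
	    // Render the definition to stdout.
	    tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	    if err := tmpl.Execute(os.Stdout, cfg); err != nil {
	        panic(err)
	    }
	}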
	I0417 18:50:18.540094   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:22:42:74 in network default
	I0417 18:50:18.540741   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring networks are active...
	I0417 18:50:18.540791   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:18.541484   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring network default is active
	I0417 18:50:18.541781   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring network mk-ha-467706 is active
	I0417 18:50:18.542114   96006 main.go:141] libmachine: (ha-467706-m02) Getting domain xml...
	I0417 18:50:18.542779   96006 main.go:141] libmachine: (ha-467706-m02) Creating domain...
	I0417 18:50:19.758274   96006 main.go:141] libmachine: (ha-467706-m02) Waiting to get IP...
	I0417 18:50:19.759295   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:19.759776   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:19.759831   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:19.759753   96351 retry.go:31] will retry after 297.823603ms: waiting for machine to come up
	I0417 18:50:20.059605   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.060162   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.060192   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.060099   96351 retry.go:31] will retry after 378.130105ms: waiting for machine to come up
	I0417 18:50:20.439850   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.440396   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.440423   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.440349   96351 retry.go:31] will retry after 309.850338ms: waiting for machine to come up
	I0417 18:50:20.751969   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.752504   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.752529   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.752447   96351 retry.go:31] will retry after 484.021081ms: waiting for machine to come up
	I0417 18:50:21.238166   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:21.238627   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:21.238653   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:21.238571   96351 retry.go:31] will retry after 723.470091ms: waiting for machine to come up
	I0417 18:50:21.963754   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:21.964274   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:21.964316   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:21.964185   96351 retry.go:31] will retry after 820.645081ms: waiting for machine to come up
	I0417 18:50:22.786393   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:22.786856   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:22.786885   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:22.786809   96351 retry.go:31] will retry after 997.774765ms: waiting for machine to come up
	I0417 18:50:23.786284   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:23.786664   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:23.786685   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:23.786639   96351 retry.go:31] will retry after 1.38947065s: waiting for machine to come up
	I0417 18:50:25.177959   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:25.178412   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:25.178445   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:25.178346   96351 retry.go:31] will retry after 1.352777892s: waiting for machine to come up
	I0417 18:50:26.532959   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:26.533453   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:26.533485   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:26.533400   96351 retry.go:31] will retry after 2.218994741s: waiting for machine to come up
	I0417 18:50:28.754002   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:28.754519   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:28.754555   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:28.754456   96351 retry.go:31] will retry after 1.815056829s: waiting for machine to come up
	I0417 18:50:30.572601   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:30.573175   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:30.573208   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:30.573099   96351 retry.go:31] will retry after 2.735191697s: waiting for machine to come up
	I0417 18:50:33.309522   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:33.309997   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:33.310028   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:33.309953   96351 retry.go:31] will retry after 3.13218678s: waiting for machine to come up
	I0417 18:50:36.446318   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:36.446793   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:36.446824   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:36.446734   96351 retry.go:31] will retry after 5.302006713s: waiting for machine to come up
	I0417 18:50:41.753177   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.753633   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has current primary IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.753665   96006 main.go:141] libmachine: (ha-467706-m02) Found IP for machine: 192.168.39.236
	I0417 18:50:41.753703   96006 main.go:141] libmachine: (ha-467706-m02) Reserving static IP address...
	I0417 18:50:41.754045   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find host DHCP lease matching {name: "ha-467706-m02", mac: "52:54:00:d8:50:50", ip: "192.168.39.236"} in network mk-ha-467706
	I0417 18:50:41.829586   96006 main.go:141] libmachine: (ha-467706-m02) Reserved static IP address: 192.168.39.236
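Each of the "unable to find current IP address … will retry after …" pairs above is one pass of a polling loop: the driver asks libvirt for the domain's DHCP lease and backs off with growing, jittered delays until the lease (192.168.39.236 here) appears. A generic sketch of that pattern follows; lookupIP is a hypothetical stand-in for the real lease lookup, not minikube's retry.go.

	// retrysketch.go - poll a condition with growing, jittered delays, the same
	// general pattern behind the "will retry after ..." lines above.
	package main

	import (
	    "errors"
	    "fmt"
	    "math/rand"
	    "time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a placeholder; the real driver inspects libvirt's network leases.
	func lookupIP() (string, error) { return "", errNoLease }

	func waitForIP(timeout time.Duration) (string, error) {
	    deadline := time.Now().Add(timeout)
	    delay := 300 * time.Millisecond
	    for attempt := 1; time.Now().Before(deadline); attempt++ {
	        if ip, err := lookupIP(); err == nil {
	            return ip, nil
	        }
	        wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
	        fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
	        time.Sleep(wait)
	        if delay < 5*time.Second {
	            delay *= 2 // grow the base delay, but keep it bounded
	        }
	    }
	    return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
	    if ip, err := waitForIP(2 * time.Second); err != nil {
	        fmt.Println("error:", err)
	    } else {
	        fmt.Println("found IP:", ip)
	    }
	}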
	I0417 18:50:41.829617   96006 main.go:141] libmachine: (ha-467706-m02) Waiting for SSH to be available...
	I0417 18:50:41.829627   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Getting to WaitForSSH function...
	I0417 18:50:41.832895   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.833340   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:41.833363   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.833541   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using SSH client type: external
	I0417 18:50:41.833571   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa (-rw-------)
	I0417 18:50:41.833601   96006 main.go:141] libmachine: (ha-467706-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:50:41.833614   96006 main.go:141] libmachine: (ha-467706-m02) DBG | About to run SSH command:
	I0417 18:50:41.833630   96006 main.go:141] libmachine: (ha-467706-m02) DBG | exit 0
	I0417 18:50:41.960706   96006 main.go:141] libmachine: (ha-467706-m02) DBG | SSH cmd err, output: <nil>: 
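The WaitForSSH step above simply runs `exit 0` over SSH with the listed options until it succeeds, which is the point where "SSH cmd err, output: <nil>" is logged. A minimal way to reproduce that probe is to shell out to ssh from Go; the host address and key path in this sketch are placeholders.

	// sshprobe.go - sketch: report whether a host accepts SSH by running "exit 0",
	// mirroring the external-client probe logged above.
	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func sshReady(host, keyPath string) bool {
	    cmd := exec.Command("ssh",
	        "-o", "StrictHostKeyChecking=no",
	        "-o", "UserKnownHostsFile=/dev/null",
	        "-o", "ConnectTimeout=10",
	        "-i", keyPath,
	        "docker@"+host,
	        "exit 0",
	    )
	    return cmd.Run() == nil // exit status 0 means the guest answered
	}

	func main() {
	    fmt.Println("ssh ready:", sshReady("192.168.39.236", "/path/to/id_rsa"))
	}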
	I0417 18:50:41.960994   96006 main.go:141] libmachine: (ha-467706-m02) KVM machine creation complete!
	I0417 18:50:41.961334   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:41.961844   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:41.962061   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:41.962225   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:50:41.962238   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:50:41.963436   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:50:41.963451   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:50:41.963456   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:50:41.963463   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:41.965691   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.965995   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:41.966028   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.966103   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:41.966297   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:41.966455   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:41.966606   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:41.966798   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:41.966995   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:41.967007   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:50:42.072210   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:50:42.072233   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:50:42.072241   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.075072   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.075435   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.075465   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.075768   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.075992   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.076161   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.076307   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.076475   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.076688   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.076702   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:50:42.185913   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:50:42.185995   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:50:42.186002   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:50:42.186011   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.186288   96006 buildroot.go:166] provisioning hostname "ha-467706-m02"
	I0417 18:50:42.186321   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.186569   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.189252   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.189677   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.189709   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.189846   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.190047   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.190213   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.190368   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.190523   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.190728   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.190744   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706-m02 && echo "ha-467706-m02" | sudo tee /etc/hostname
	I0417 18:50:42.311838   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706-m02
	
	I0417 18:50:42.311871   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.314488   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.314887   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.314914   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.315085   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.315336   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.315547   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.315701   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.315908   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.316085   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.316102   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:50:42.434077   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
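The shell fragment above keeps the guest's /etc/hosts in step with the new hostname: if the name is missing, it either rewrites an existing 127.0.1.1 entry or appends one. The same idempotent logic, applied to a local example file instead of the guest, might look like the following Go sketch.

	// hostsentry.go - sketch: make sure a "127.0.1.1 <hostname>" line exists,
	// rewriting an existing 127.0.1.1 entry or appending a new one. It edits a
	// local example file, not a real /etc/hosts.
	package main

	import (
	    "fmt"
	    "os"
	    "strings"
	)

	func ensureHostsEntry(path, hostname string) error {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return err
	    }
	    if strings.Contains(string(data), hostname) {
	        return nil // already present, nothing to do
	    }
	    lines := strings.Split(string(data), "\n")
	    replaced := false
	    for i, line := range lines {
	        if strings.HasPrefix(line, "127.0.1.1") {
	            lines[i] = "127.0.1.1 " + hostname
	            replaced = true
	        }
	    }
	    if !replaced {
	        lines = append(lines, "127.0.1.1 "+hostname)
	    }
	    return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
	    // Seed a local example file so the sketch is self-contained.
	    _ = os.WriteFile("/tmp/hosts.example", []byte("127.0.0.1 localhost\n"), 0644)
	    if err := ensureHostsEntry("/tmp/hosts.example", "ha-467706-m02"); err != nil {
	        fmt.Println("error:", err)
	    }
	}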
	I0417 18:50:42.434113   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:50:42.434135   96006 buildroot.go:174] setting up certificates
	I0417 18:50:42.434148   96006 provision.go:84] configureAuth start
	I0417 18:50:42.434166   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.434487   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:42.437258   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.437702   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.437734   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.437882   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.440159   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.440448   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.440490   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.440578   96006 provision.go:143] copyHostCerts
	I0417 18:50:42.440617   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:50:42.440657   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:50:42.440669   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:50:42.440748   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:50:42.440873   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:50:42.440901   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:50:42.440909   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:50:42.440952   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:50:42.441025   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:50:42.441048   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:50:42.441054   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:50:42.441088   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:50:42.441162   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706-m02 san=[127.0.0.1 192.168.39.236 ha-467706-m02 localhost minikube]
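provision.go:117 above generates a per-node server certificate whose subject alternative names cover the loopback address, the node IP, the hostname and the generic minikube names, signed with the shared CA. The sketch below builds the same kind of certificate with crypto/x509; it is self-signed purely for brevity (minikube signs with ca.pem/ca-key.pem), and the output path is an example.

	// servercert.go - sketch: create a server certificate carrying DNS and IP
	// SANs like the ones logged above. Self-signed to keep the example short;
	// the real flow signs with the minikube CA key.
	package main

	import (
	    "crypto/rand"
	    "crypto/rsa"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)

	func main() {
	    key, err := rsa.GenerateKey(rand.Reader, 2048)
	    if err != nil {
	        panic(err)
	    }
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(1),
	        Subject:      pkix.Name{Organization: []string{"jenkins.ha-467706-m02"}},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        DNSNames:     []string{"ha-467706-m02", "localhost", "minikube"},
	        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.236")},
	    }
	    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    if err != nil {
	        panic(err)
	    }
	    out, err := os.Create("/tmp/server.pem") // example destination
	    if err != nil {
	        panic(err)
	    }
	    defer out.Close()
	    if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
	        panic(err)
	    }
	}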
	I0417 18:50:42.760848   96006 provision.go:177] copyRemoteCerts
	I0417 18:50:42.760909   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:50:42.760938   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.763523   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.763809   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.763835   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.763992   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.764230   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.764378   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.764519   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:42.847766   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:50:42.847833   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:50:42.873757   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:50:42.873849   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 18:50:42.899584   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:50:42.899649   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:50:42.925517   96006 provision.go:87] duration metric: took 491.346719ms to configureAuth
	I0417 18:50:42.925550   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:50:42.925744   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:42.925844   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.928428   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.928795   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.928848   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.929039   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.929254   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.929439   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.929593   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.929783   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.929940   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.929955   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:50:43.215765   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 18:50:43.215798   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:50:43.215807   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetURL
	I0417 18:50:43.217123   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using libvirt version 6000000
	I0417 18:50:43.219595   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.220034   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.220066   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.220248   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:50:43.220262   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:50:43.220270   96006 client.go:171] duration metric: took 25.096172136s to LocalClient.Create
	I0417 18:50:43.220292   96006 start.go:167] duration metric: took 25.096248433s to libmachine.API.Create "ha-467706"
	I0417 18:50:43.220302   96006 start.go:293] postStartSetup for "ha-467706-m02" (driver="kvm2")
	I0417 18:50:43.220314   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:50:43.220347   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.220618   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:50:43.220650   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.222911   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.223248   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.223279   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.223454   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.223636   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.223808   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.223996   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.309604   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:50:43.314300   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:50:43.314329   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:50:43.314402   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:50:43.314489   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:50:43.314503   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:50:43.314613   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:50:43.324876   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:50:43.352537   96006 start.go:296] duration metric: took 132.222885ms for postStartSetup
	I0417 18:50:43.352594   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:43.353271   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:43.355939   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.356297   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.356327   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.356586   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:43.356837   96006 start.go:128] duration metric: took 25.251245741s to createHost
	I0417 18:50:43.356869   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.358910   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.359264   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.359292   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.359361   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.359556   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.359731   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.359873   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.360045   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:43.360216   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:43.360227   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:50:43.469667   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379843.419472324
	
	I0417 18:50:43.469695   96006 fix.go:216] guest clock: 1713379843.419472324
	I0417 18:50:43.469704   96006 fix.go:229] Guest: 2024-04-17 18:50:43.419472324 +0000 UTC Remote: 2024-04-17 18:50:43.356854721 +0000 UTC m=+80.784201764 (delta=62.617603ms)
	I0417 18:50:43.469725   96006 fix.go:200] guest clock delta is within tolerance: 62.617603ms
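The garbled `date +%!s(MISSING).%!N(MISSING)` a few lines up is the guest command `date +%s.%N` passing through a Go format call; its output is the guest timestamp shown, and the driver accepts the machine when the offset against the host clock (62.6ms in this run) is within tolerance. The comparison reduces to something like the sketch below, where the tolerance value is an assumption for the example.

	// clockdelta.go - sketch: compare a guest-reported Unix timestamp against the
	// host clock and accept it if the absolute offset is within a tolerance.
	package main

	import (
	    "fmt"
	    "time"
	)

	func withinTolerance(guest time.Time, tolerance time.Duration) bool {
	    delta := time.Since(guest)
	    if delta < 0 {
	        delta = -delta
	    }
	    return delta <= tolerance
	}

	func main() {
	    // Timestamp taken from the log above (seconds.nanoseconds from the guest).
	    guest := time.Unix(1713379843, 419472324)
	    fmt.Println("within tolerance:", withinTolerance(guest, 2*time.Second)) // example tolerance
	}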
	I0417 18:50:43.469732   96006 start.go:83] releasing machines lock for "ha-467706-m02", held for 25.364268885s
	I0417 18:50:43.469750   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.470040   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:43.472586   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.473021   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.473047   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.475263   96006 out.go:177] * Found network options:
	I0417 18:50:43.476714   96006 out.go:177]   - NO_PROXY=192.168.39.159
	W0417 18:50:43.477965   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:50:43.478009   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478640   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478829   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478917   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:50:43.478957   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	W0417 18:50:43.479048   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:50:43.479126   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:50:43.479148   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.481734   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.481965   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482051   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.482106   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482197   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.482332   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.482357   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482366   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.482509   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.482559   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.482676   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.482729   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.482839   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.482978   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.723186   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:50:43.729400   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:50:43.729487   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:50:43.747191   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:50:43.747222   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:50:43.747298   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:50:43.766134   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:50:43.782049   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:50:43.782103   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:50:43.796842   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:50:43.811837   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:50:43.955183   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:50:44.119405   96006 docker.go:233] disabling docker service ...
	I0417 18:50:44.119488   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:50:44.135751   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:50:44.150314   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:50:44.289369   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:50:44.419335   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:50:44.434448   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:50:44.454265   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:50:44.454341   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.465548   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:50:44.465626   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.477204   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.488510   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.500218   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:50:44.511859   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.523479   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.544983   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
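The run of sed commands above steers /etc/crio/crio.conf.d/02-crio.conf toward a fixed pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and an opened unprivileged-port sysctl. Roughly the end state they converge on is the drop-in written out below; the TOML section headers and the temp-file destination are assumptions made to keep the sketch self-contained.

	// criodropin.go - sketch: the approximate configuration the sed edits above
	// aim for, written to a local example file for illustration only.
	package main

	import "os"

	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
	    if err := os.WriteFile("/tmp/02-crio.conf.example", []byte(crioDropIn), 0644); err != nil {
	        panic(err)
	    }
	}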
	I0417 18:50:44.556679   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:50:44.567217   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:50:44.567280   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:50:44.581851   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 18:50:44.592445   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:50:44.730545   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 18:50:44.871219   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:50:44.871302   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:50:44.876072   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:50:44.876152   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:50:44.880114   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:50:44.920173   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:50:44.920278   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:50:44.949069   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:50:44.987468   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:50:44.988990   96006 out.go:177]   - env NO_PROXY=192.168.39.159
	I0417 18:50:44.990345   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:44.992870   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:44.993230   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:44.993253   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:44.993438   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:50:44.998037   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:50:45.011925   96006 mustload.go:65] Loading cluster: ha-467706
	I0417 18:50:45.012122   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:45.012364   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:45.012402   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:45.027180   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0417 18:50:45.027710   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:45.028266   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:45.028293   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:45.028637   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:45.029263   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:45.031035   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:45.031317   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:45.031355   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:45.046157   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35907
	I0417 18:50:45.046558   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:45.047075   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:45.047105   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:45.047444   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:45.047639   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:45.047800   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.236
	I0417 18:50:45.047815   96006 certs.go:194] generating shared ca certs ...
	I0417 18:50:45.047834   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.047966   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:50:45.048005   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:50:45.048014   96006 certs.go:256] generating profile certs ...
	I0417 18:50:45.048106   96006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:50:45.048131   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3
	I0417 18:50:45.048146   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.254]
	I0417 18:50:45.216050   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 ...
	I0417 18:50:45.216082   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3: {Name:mke40eb0bbfa4a257d69dee7c0db8615a28a2c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.216247   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3 ...
	I0417 18:50:45.216267   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3: {Name:mk744f487b85aae0492308fe90f1def1e1057446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.216334   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:50:45.216463   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:50:45.216592   96006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 18:50:45.216609   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:50:45.216622   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:50:45.216636   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:50:45.216647   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:50:45.216658   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:50:45.216668   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:50:45.216682   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:50:45.216694   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:50:45.216742   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:50:45.216789   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:50:45.216800   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:50:45.216830   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:50:45.216853   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:50:45.216875   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:50:45.216912   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:50:45.216937   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.216951   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.216966   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
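
The crypto.go lines above mint the profile's apiserver certificate with IP SANs covering the service IP (10.96.0.1), loopback, both node IPs and the HA virtual IP (192.168.39.254), signed by the existing minikubeCA. The following is only a rough standard-library sketch of that kind of step; the file names, PKCS#1 key format and certificate fields are assumptions, not minikube's actual crypto.go (error handling is elided for brevity).

    // certsketch.go - hedged sketch: sign an apiserver cert with IP SANs,
    // loosely mirroring the crypto.go log lines above. Illustrative only.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Parse the existing minikube CA (assumed PEM cert + PKCS#1 key).
        caPEM, _ := os.ReadFile("ca.crt")
        caKeyPEM, _ := os.ReadFile("ca.key")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        // Fresh key pair for the apiserver certificate.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs seen in the log: service IP, loopback, node IPs, HA VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.159"), net.ParseIP("192.168.39.236"), net.ParseIP("192.168.39.254"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        _ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }
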
	I0417 18:50:45.217028   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:45.220190   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:45.220566   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:45.220597   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:45.220791   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:45.221049   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:45.221236   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:45.221414   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:45.293203   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0417 18:50:45.298477   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0417 18:50:45.313225   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0417 18:50:45.318150   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0417 18:50:45.332489   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0417 18:50:45.340783   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0417 18:50:45.351476   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0417 18:50:45.355935   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0417 18:50:45.368043   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0417 18:50:45.372901   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0417 18:50:45.385103   96006 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0417 18:50:45.390349   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0417 18:50:45.403053   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:50:45.428841   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:50:45.453137   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:50:45.478112   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:50:45.503276   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0417 18:50:45.528841   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 18:50:45.554403   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:50:45.581253   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:50:45.609231   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:50:45.635402   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:50:45.661160   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:50:45.687529   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0417 18:50:45.704763   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0417 18:50:45.721300   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0417 18:50:45.738377   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0417 18:50:45.755487   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0417 18:50:45.772301   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0417 18:50:45.789375   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0417 18:50:45.807294   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:50:45.813499   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:50:45.825040   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.830049   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.830110   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.836295   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:50:45.848368   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:50:45.859950   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.865632   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.865700   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.871771   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 18:50:45.883399   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:50:45.894718   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.900008   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.900078   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.906155   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
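
The openssl/ln commands above install each CA into the guest's trust store: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it. Below is a hedged Go sketch of the same two steps; it shells out to openssl against local paths, whereas minikube drives these commands inside the VM over SSH via ssh_runner.

    // trust_sketch.go - illustrative only: compute the openssl subject hash of
    // a CA PEM and create the /etc/ssl/certs/<hash>.0 symlink, as in the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: drop any existing link, then point it at the PEM.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        for _, p := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/83207.pem",
        } {
            if err := installCA(p); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
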
	I0417 18:50:45.917743   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:50:45.922549   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:50:45.922604   96006 kubeadm.go:928] updating node {m02 192.168.39.236 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:50:45.922716   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
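
The kubelet unit above differs per node only in --hostname-override and --node-ip. The helper below is purely illustrative and just assembles that ExecStart line from the node name and IP; minikube's real kubeadm.go templating is more involved.

    // kubelet_flags_sketch.go - illustrative assembly of the per-node kubelet
    // ExecStart line shown in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func kubeletExecStart(version, nodeName, nodeIP string) string {
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + nodeName,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.30.0-rc.2", "ha-467706-m02", "192.168.39.236"))
    }
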
	I0417 18:50:45.922752   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:50:45.922800   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:50:45.943173   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:50:45.943265   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
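
kube-vip.go renders the static-pod manifest above from a template: kube-vip runs on every control-plane node, uses leader election (vip_leaderelection / plndr-cp-lock) to hold the virtual IP 192.168.39.254 on eth0 via ARP, and load-balances apiserver traffic on port 8443 (lb_enable / lb_port). The sketch below only shows the templating idea with a heavily reduced manifest; it is not minikube's actual template.

    // kubevip_template_sketch.go - hedged illustration of rendering a kube-vip
    // static-pod manifest from a template, in the spirit of kube-vip.go above.
    package main

    import (
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - {name: vip_interface, value: {{ .Interface }}}
        - {name: address, value: "{{ .VIP }}"}
        - {name: port, value: "{{ .Port }}"}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        _ = t.Execute(os.Stdout, map[string]string{
            "Image":     "ghcr.io/kube-vip/kube-vip:v0.7.1",
            "Interface": "eth0",
            "VIP":       "192.168.39.254", // the APIServerHAVIP from the profile config
            "Port":      "8443",
        })
    }
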
	I0417 18:50:45.943322   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:50:45.956026   96006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0-rc.2': No such file or directory
	
	Initiating transfer...
	I0417 18:50:45.956087   96006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:50:45.967843   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0417 18:50:45.967881   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:50:45.967960   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:50:45.967964   96006 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet
	I0417 18:50:45.967984   96006 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm
	I0417 18:50:45.972622   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl': No such file or directory
	I0417 18:50:45.972648   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl (51454104 bytes)
	I0417 18:50:48.202574   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:50:48.202664   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:50:48.208132   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm': No such file or directory
	I0417 18:50:48.208171   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm (50249880 bytes)
	I0417 18:50:50.185607   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:50:50.201809   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:50:50.201893   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:50:50.207126   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet': No such file or directory
	I0417 18:50:50.207170   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet (100100024 bytes)
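
Each missing binary is fetched from dl.k8s.io, verified against the published .sha256 file (the checksum=file:... URLs above), cached under .minikube/cache, and then scp'd into /var/lib/minikube/binaries on the node. A hedged sketch of just the download-and-verify part follows; the cache layer and the SSH copy are skipped.

    // fetch_sketch.go - illustrative download plus sha256 verification of a
    // release binary, approximating what download.go does for kubelet/kubeadm/kubectl.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        // The .sha256 file published next to the binary holds the expected digest.
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != strings.Fields(string(want))[0] {
            return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet"
        if err := fetch(url, "kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
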
	I0417 18:50:50.649074   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0417 18:50:50.659340   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0417 18:50:50.678040   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:50:50.697112   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 18:50:50.715650   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:50:50.720043   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:50:50.733632   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:50:50.869987   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:50:50.888165   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:50.888697   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:50.888785   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:50.904309   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0417 18:50:50.904752   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:50.905310   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:50.905332   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:50.905622   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:50.905847   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:50.906027   96006 start.go:316] joinCluster: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:50:50.906152   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0417 18:50:50.906180   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:50.909437   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:50.909900   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:50.909928   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:50.910110   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:50.910342   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:50.910526   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:50.910722   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:51.055851   96006 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:51.055919   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt3ppm.k6jpgcx1jeyj2jt1 --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m02 --control-plane --apiserver-advertise-address=192.168.39.236 --apiserver-bind-port=8443"
	I0417 18:51:13.972353   96006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt3ppm.k6jpgcx1jeyj2jt1 --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m02 --control-plane --apiserver-advertise-address=192.168.39.236 --apiserver-bind-port=8443": (22.916397784s)
	I0417 18:51:13.972395   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0417 18:51:14.574803   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706-m02 minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=false
	I0417 18:51:14.717411   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-467706-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0417 18:51:14.893231   96006 start.go:318] duration metric: took 23.987209952s to joinCluster
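
Joining m02 as a second control plane comes down to two remote commands: kubeadm token create --print-join-command --ttl=0 on the primary, then the printed join command on m02 with the extra flags seen above (--control-plane, --apiserver-advertise-address, --cri-socket, --node-name), followed by the label and taint adjustments. The helper below only assembles that second command string as an illustration; the SSH execution and token handling are elided, and the placeholder token/hash are deliberate.

    // join_sketch.go - assumption-laden sketch of building the control-plane
    // join command that the log runs on ha-467706-m02.
    package main

    import "fmt"

    func joinCommand(joinBase, nodeName, advertiseIP string, port int) string {
        // joinBase is what `kubeadm token create --print-join-command` returned
        // on the primary node.
        return fmt.Sprintf(
            "sudo env PATH=/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH %s "+
                "--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
                "--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
            joinBase, nodeName, advertiseIP, port)
    }

    func main() {
        fmt.Println(joinCommand(
            "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>",
            "ha-467706-m02", "192.168.39.236", 8443))
    }
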
	I0417 18:51:14.893330   96006 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:51:14.895174   96006 out.go:177] * Verifying Kubernetes components...
	I0417 18:51:14.893626   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:14.896844   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:51:15.183083   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:51:15.224796   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:51:15.225073   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0417 18:51:15.225173   96006 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.159:8443
	I0417 18:51:15.225692   96006 node_ready.go:35] waiting up to 6m0s for node "ha-467706-m02" to be "Ready" ...
	I0417 18:51:15.225869   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:15.225880   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:15.225892   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:15.225896   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:15.236705   96006 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0417 18:51:15.726883   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:15.726910   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:15.726922   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:15.726928   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:15.735017   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:16.226972   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:16.227006   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:16.227019   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:16.227024   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:16.231152   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:16.726335   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:16.726359   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:16.726368   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:16.726371   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:16.730277   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:17.226638   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:17.226669   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:17.226678   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:17.226681   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:17.230662   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:17.231373   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:17.726905   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:17.726929   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:17.726938   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:17.726941   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:17.731821   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:18.226785   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:18.226823   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:18.226835   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:18.226841   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:18.231456   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:18.726414   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:18.726439   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:18.726448   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:18.726451   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:18.742533   96006 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0417 18:51:19.225974   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:19.225999   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:19.226009   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:19.226014   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:19.231635   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:19.233042   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:19.726389   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:19.726411   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:19.726420   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:19.726425   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:19.729897   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:20.225875   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:20.225899   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:20.225907   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:20.225911   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:20.229100   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:20.726901   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:20.726924   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:20.726932   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:20.726936   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:20.730417   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:21.226005   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:21.226091   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:21.226109   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:21.226117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:21.230666   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:21.726785   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:21.726806   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:21.726813   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:21.726818   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:21.731509   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:21.732328   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:22.226721   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:22.226745   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:22.226756   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:22.226761   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:22.231754   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:22.726964   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:22.726997   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:22.727010   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:22.727018   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:22.735454   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:23.226383   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.226413   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.226428   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.226437   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.229878   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.230663   96006 node_ready.go:49] node "ha-467706-m02" has status "Ready":"True"
	I0417 18:51:23.230683   96006 node_ready.go:38] duration metric: took 8.0049708s for node "ha-467706-m02" to be "Ready" ...
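
node_ready.go polls GET /api/v1/nodes/ha-467706-m02 roughly every half second (the round_trippers lines above) until the node reports a True Ready condition, within the 6m0s budget. Below is an approximate client-go equivalent; minikube actually builds its client from the profile kubeconfig in kapi.go, so the kubeconfig loading here is a simplification.

    // nodeready_sketch.go - hedged approximation of the node_ready wait loop
    // using client-go.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(cs, "ha-467706-m02", 6*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
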
	I0417 18:51:23.230694   96006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:51:23.230814   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:23.230827   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.230835   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.230838   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.239654   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:23.246313   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.246432   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-56dz8
	I0417 18:51:23.246443   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.246451   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.246454   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.249987   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.251471   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.251491   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.251498   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.251503   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.254142   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.254681   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.254699   96006 pod_ready.go:81] duration metric: took 8.360266ms for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.254707   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.254764   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kcdqn
	I0417 18:51:23.254773   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.254780   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.254784   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.257241   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.257923   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.257937   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.257944   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.257947   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.260197   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.260616   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.260631   96006 pod_ready.go:81] duration metric: took 5.918629ms for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.260640   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.260696   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706
	I0417 18:51:23.260705   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.260712   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.260723   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.263098   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.263593   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.263606   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.263612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.263616   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.266189   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.267242   96006 pod_ready.go:92] pod "etcd-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.267256   96006 pod_ready.go:81] duration metric: took 6.610637ms for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.267265   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.267312   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:23.267322   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.267328   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.267331   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.269674   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.270282   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.270294   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.270301   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.270304   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.272657   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.767707   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:23.767738   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.767746   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.767751   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.770927   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.771848   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.771867   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.771877   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.771882   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.774590   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:24.267451   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:24.267480   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.267493   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.267498   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.271213   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.272086   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:24.272102   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.272109   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.272114   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.275365   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.768469   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:24.768502   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.768514   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.768519   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.772128   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.773012   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:24.773028   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.773037   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.773041   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.776084   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.268038   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:25.268060   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.268068   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.268073   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.271535   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.272213   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:25.272231   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.272239   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.272244   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.275254   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:25.275716   96006 pod_ready.go:102] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:25.768203   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:25.768229   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.768239   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.768245   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.771806   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.772566   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:25.772611   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.772619   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.772625   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.775636   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:26.268063   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:26.268099   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.268124   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.268129   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.272010   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:26.272908   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:26.272926   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.272937   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.272942   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.275754   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:26.767796   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:26.767819   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.767827   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.767831   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.771440   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:26.772137   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:26.772157   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.772168   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.772174   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.775107   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.267560   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:27.267584   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.267592   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.267597   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.271553   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:27.272200   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.272217   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.272226   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.272229   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.274967   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.275841   96006 pod_ready.go:92] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:27.275858   96006 pod_ready.go:81] duration metric: took 4.008587704s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
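
pod_ready.go repeats the same pattern for each system-critical pod: fetch the pod, test its Ready condition, and also fetch the hosting node (the paired node GETs above) before declaring it ready. The condition test itself is the small check sketched below.

    // podready_sketch.go - the Ready-condition test that the pod_ready waits
    // above keep re-evaluating; names are taken from the log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the PodReady condition is True, mirroring the
    // has status "Ready":"True" lines in the log.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Minimal stand-in object; in the real flow the pod comes from
        // GET /api/v1/namespaces/kube-system/pods/<name> as in the log.
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println("etcd-ha-467706-m02 ready:", isPodReady(pod))
    }
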
	I0417 18:51:27.275872   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.275926   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706
	I0417 18:51:27.275934   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.275941   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.275945   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.278731   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.279548   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:27.279562   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.279569   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.279572   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.282012   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.282523   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:27.282541   96006 pod_ready.go:81] duration metric: took 6.66288ms for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.282549   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.282597   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:27.282605   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.282612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.282616   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.285219   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.285913   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.285925   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.285932   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.285936   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.288353   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.782927   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:27.782951   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.782962   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.782966   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.787431   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:27.788870   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.788887   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.788895   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.788900   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.792215   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.283697   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:28.283728   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.283737   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.283741   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.287291   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.288522   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:28.288535   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.288545   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.288548   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.291872   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.782825   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:28.782848   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.782857   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.782862   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.787118   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:28.787874   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:28.787889   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.787897   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.787902   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.791077   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:29.283249   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:29.283280   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.283292   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.283297   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.288519   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:29.289409   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:29.289428   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.289440   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.289446   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.292965   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:29.293591   96006 pod_ready.go:102] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:29.782904   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:29.782929   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.782940   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.782944   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.787500   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:29.789301   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:29.789322   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.789333   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.789339   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.793831   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:30.283438   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:30.283462   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.283470   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.283474   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.286931   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:30.287797   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:30.287812   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.287820   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.287825   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.290475   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:30.783555   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:30.783578   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.783586   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.783591   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.787803   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:30.788747   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:30.788767   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.788804   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.788811   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.792158   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.283747   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:31.283777   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.283790   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.283795   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.288177   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:31.289081   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:31.289095   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.289103   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.289107   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.292276   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.783370   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:31.783402   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.783414   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.783420   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.786680   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.787486   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:31.787503   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.787510   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.787514   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.790453   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:31.790996   96006 pod_ready.go:102] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:32.283343   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:32.283366   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.283375   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.283380   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.287390   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.289101   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.289117   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.289129   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.289137   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.297645   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:32.298808   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.298825   96006 pod_ready.go:81] duration metric: took 5.016269549s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.298836   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.298896   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:51:32.298903   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.298911   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.298918   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.303341   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:32.304646   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.304662   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.304670   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.304675   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.307481   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.307982   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.308001   96006 pod_ready.go:81] duration metric: took 9.157988ms for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.308011   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.308072   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:51:32.308080   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.308087   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.308090   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.310622   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.311286   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.311300   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.311306   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.311309   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.314424   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.315460   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.315477   96006 pod_ready.go:81] duration metric: took 7.460114ms for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.315486   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.315539   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:51:32.315547   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.315554   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.315562   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.318568   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.426747   96006 request.go:629] Waited for 107.278257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.426836   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.426844   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.426855   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.426865   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.430214   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.430962   96006 pod_ready.go:92] pod "kube-proxy-hd469" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.430982   96006 pod_ready.go:81] duration metric: took 115.490294ms for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
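
The "Waited for ... due to client-side throttling" messages above come from client-go's built-in client-side rate limiter, not from server-side API Priority and Fairness (the log message itself notes this). That limiter is governed by the QPS and Burst fields of rest.Config. A minimal sketch of raising those limits follows; the values 50/100 are arbitrary and the kubeconfig path is just the default location:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are roughly QPS=5, Burst=10; once the burst is exhausted the
	// client delays requests and logs the "client-side throttling" message.
	config.QPS = 50
	config.Burst = 100
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
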
	I0417 18:51:32.430993   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.626388   96006 request.go:629] Waited for 195.326382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:51:32.626452   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:51:32.626458   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.626466   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.626478   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.630307   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.827284   96006 request.go:629] Waited for 196.366908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.827344   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.827349   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.827357   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.827364   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.831913   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:32.832435   96006 pod_ready.go:92] pod "kube-proxy-qxtf4" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.832455   96006 pod_ready.go:81] duration metric: took 401.454584ms for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.832469   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.026738   96006 request.go:629] Waited for 194.18784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:51:33.026803   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:51:33.026810   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.026821   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.026837   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.030357   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.226664   96006 request.go:629] Waited for 195.467878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:33.226745   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:33.226754   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.226763   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.226766   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.230848   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:33.231696   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:33.231721   96006 pod_ready.go:81] duration metric: took 399.24464ms for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.231736   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.426832   96006 request.go:629] Waited for 194.99926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:51:33.426911   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:51:33.426918   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.426927   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.426938   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.430894   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.626958   96006 request.go:629] Waited for 195.422824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:33.627025   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:33.627030   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.627038   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.627041   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.631026   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.631774   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:33.631795   96006 pod_ready.go:81] duration metric: took 400.050921ms for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.631806   96006 pod_ready.go:38] duration metric: took 10.401080338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
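
The polling visible above GETs each control-plane pod roughly every 500ms, checks its Ready condition, and then looks up the pod's node. Below is a minimal client-go sketch of that pattern; waitPodReady is an illustrative helper, not minikube's actual pod_ready.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or
// the timeout expires. Illustrative only; minikube's real loop also inspects
// the node the pod is scheduled on.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("pod %s never became Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-apiserver-ha-467706-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
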
	I0417 18:51:33.631822   96006 api_server.go:52] waiting for apiserver process to appear ...
	I0417 18:51:33.631879   96006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:51:33.648554   96006 api_server.go:72] duration metric: took 18.75517691s to wait for apiserver process to appear ...
	I0417 18:51:33.648600   96006 api_server.go:88] waiting for apiserver healthz status ...
	I0417 18:51:33.648626   96006 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0417 18:51:33.653277   96006 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0417 18:51:33.653354   96006 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I0417 18:51:33.653362   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.653370   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.653374   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.654340   96006 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0417 18:51:33.654453   96006 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 18:51:33.654471   96006 api_server.go:131] duration metric: took 5.864211ms to wait for apiserver health ...
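
Once the pods are Ready, the health check is a direct GET against /healthz (expecting the literal body "ok") and /version on the apiserver endpoint. A rough equivalent is sketched below; skipping TLS verification is a shortcut assumed here for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// WARNING: InsecureSkipVerify is only acceptable for a throwaway sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.159:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
	}
}
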
	I0417 18:51:33.654480   96006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 18:51:33.826909   96006 request.go:629] Waited for 172.335088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:33.826975   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:33.826981   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.826989   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.826993   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.832448   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:33.837613   96006 system_pods.go:59] 17 kube-system pods found
	I0417 18:51:33.837640   96006 system_pods.go:61] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:51:33.837646   96006 system_pods.go:61] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:51:33.837650   96006 system_pods.go:61] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:51:33.837654   96006 system_pods.go:61] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:51:33.837659   96006 system_pods.go:61] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:51:33.837663   96006 system_pods.go:61] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:51:33.837668   96006 system_pods.go:61] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:51:33.837675   96006 system_pods.go:61] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:51:33.837681   96006 system_pods.go:61] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:51:33.837690   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:51:33.837698   96006 system_pods.go:61] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:51:33.837703   96006 system_pods.go:61] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:51:33.837709   96006 system_pods.go:61] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:51:33.837713   96006 system_pods.go:61] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:51:33.837718   96006 system_pods.go:61] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:51:33.837721   96006 system_pods.go:61] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:51:33.837726   96006 system_pods.go:61] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:51:33.837732   96006 system_pods.go:74] duration metric: took 183.242044ms to wait for pod list to return data ...
	I0417 18:51:33.837743   96006 default_sa.go:34] waiting for default service account to be created ...
	I0417 18:51:34.027238   96006 request.go:629] Waited for 189.408907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:51:34.027344   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:51:34.027356   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.027369   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.027387   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.031074   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:34.031378   96006 default_sa.go:45] found service account: "default"
	I0417 18:51:34.031406   96006 default_sa.go:55] duration metric: took 193.654279ms for default service account to be created ...
	I0417 18:51:34.031422   96006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 18:51:34.226919   96006 request.go:629] Waited for 195.41558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:34.227010   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:34.227019   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.227035   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.227045   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.233257   96006 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0417 18:51:34.238042   96006 system_pods.go:86] 17 kube-system pods found
	I0417 18:51:34.238071   96006 system_pods.go:89] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:51:34.238076   96006 system_pods.go:89] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:51:34.238081   96006 system_pods.go:89] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:51:34.238087   96006 system_pods.go:89] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:51:34.238093   96006 system_pods.go:89] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:51:34.238099   96006 system_pods.go:89] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:51:34.238104   96006 system_pods.go:89] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:51:34.238111   96006 system_pods.go:89] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:51:34.238121   96006 system_pods.go:89] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:51:34.238129   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:51:34.238135   96006 system_pods.go:89] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:51:34.238144   96006 system_pods.go:89] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:51:34.238152   96006 system_pods.go:89] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:51:34.238162   96006 system_pods.go:89] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:51:34.238171   96006 system_pods.go:89] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:51:34.238180   96006 system_pods.go:89] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:51:34.238189   96006 system_pods.go:89] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:51:34.238202   96006 system_pods.go:126] duration metric: took 206.768944ms to wait for k8s-apps to be running ...
	I0417 18:51:34.238215   96006 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 18:51:34.238275   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:51:34.255222   96006 system_svc.go:56] duration metric: took 16.993241ms WaitForService to wait for kubelet
	I0417 18:51:34.255257   96006 kubeadm.go:576] duration metric: took 19.361885993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:51:34.255285   96006 node_conditions.go:102] verifying NodePressure condition ...
	I0417 18:51:34.426766   96006 request.go:629] Waited for 171.388544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I0417 18:51:34.426844   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I0417 18:51:34.426849   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.426857   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.426862   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.430450   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:34.431543   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:51:34.431577   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:51:34.431593   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:51:34.431598   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:51:34.431605   96006 node_conditions.go:105] duration metric: took 176.314123ms to run NodePressure ...
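
The NodePressure step lists all nodes and reads each node's capacity (ephemeral storage, CPU) and pressure conditions, which is where the two "capacity" pairs above come from, one per node. A short sketch of the same read with client-go, under the same kubeconfig assumption as earlier:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
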
	I0417 18:51:34.431620   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:51:34.431657   96006 start.go:254] writing updated cluster config ...
	I0417 18:51:34.433887   96006 out.go:177] 
	I0417 18:51:34.435625   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:34.435731   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:34.437530   96006 out.go:177] * Starting "ha-467706-m03" control-plane node in "ha-467706" cluster
	I0417 18:51:34.438965   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:51:34.438995   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:51:34.439105   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:51:34.439117   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:51:34.439238   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:34.439424   96006 start.go:360] acquireMachinesLock for ha-467706-m03: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:51:34.439471   96006 start.go:364] duration metric: took 26.368µs to acquireMachinesLock for "ha-467706-m03"
	I0417 18:51:34.439490   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:51:34.439582   96006 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0417 18:51:34.441053   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:51:34.441163   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:51:34.441212   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:51:34.456551   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I0417 18:51:34.457105   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:51:34.457624   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:51:34.457646   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:51:34.457978   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:51:34.458265   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:34.458431   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:34.458625   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:51:34.458663   96006 client.go:168] LocalClient.Create starting
	I0417 18:51:34.458711   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:51:34.458750   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:51:34.458773   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:51:34.458838   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:51:34.458866   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:51:34.458883   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:51:34.458907   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:51:34.458920   96006 main.go:141] libmachine: (ha-467706-m03) Calling .PreCreateCheck
	I0417 18:51:34.459099   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:34.459607   96006 main.go:141] libmachine: Creating machine...
	I0417 18:51:34.459627   96006 main.go:141] libmachine: (ha-467706-m03) Calling .Create
	I0417 18:51:34.459808   96006 main.go:141] libmachine: (ha-467706-m03) Creating KVM machine...
	I0417 18:51:34.461114   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found existing default KVM network
	I0417 18:51:34.461289   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found existing private KVM network mk-ha-467706
	I0417 18:51:34.461430   96006 main.go:141] libmachine: (ha-467706-m03) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 ...
	I0417 18:51:34.461457   96006 main.go:141] libmachine: (ha-467706-m03) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:51:34.461539   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.461418   96690 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:51:34.461604   96006 main.go:141] libmachine: (ha-467706-m03) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:51:34.702350   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.702215   96690 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa...
	I0417 18:51:34.869742   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.869577   96690 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/ha-467706-m03.rawdisk...
	I0417 18:51:34.869789   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Writing magic tar header
	I0417 18:51:34.869844   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Writing SSH key tar header
	I0417 18:51:34.869872   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 (perms=drwx------)
	I0417 18:51:34.869891   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.869703   96690 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 ...
	I0417 18:51:34.869911   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03
	I0417 18:51:34.869926   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:51:34.869933   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:51:34.869949   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:51:34.869960   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:51:34.869974   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:51:34.869987   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:51:34.869996   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:51:34.870021   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:51:34.870047   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home
	I0417 18:51:34.870063   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:51:34.870079   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:51:34.870089   96006 main.go:141] libmachine: (ha-467706-m03) Creating domain...
	I0417 18:51:34.870104   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Skipping /home - not owner
	I0417 18:51:34.871109   96006 main.go:141] libmachine: (ha-467706-m03) define libvirt domain using xml: 
	I0417 18:51:34.871128   96006 main.go:141] libmachine: (ha-467706-m03) <domain type='kvm'>
	I0417 18:51:34.871138   96006 main.go:141] libmachine: (ha-467706-m03)   <name>ha-467706-m03</name>
	I0417 18:51:34.871151   96006 main.go:141] libmachine: (ha-467706-m03)   <memory unit='MiB'>2200</memory>
	I0417 18:51:34.871163   96006 main.go:141] libmachine: (ha-467706-m03)   <vcpu>2</vcpu>
	I0417 18:51:34.871169   96006 main.go:141] libmachine: (ha-467706-m03)   <features>
	I0417 18:51:34.871176   96006 main.go:141] libmachine: (ha-467706-m03)     <acpi/>
	I0417 18:51:34.871183   96006 main.go:141] libmachine: (ha-467706-m03)     <apic/>
	I0417 18:51:34.871196   96006 main.go:141] libmachine: (ha-467706-m03)     <pae/>
	I0417 18:51:34.871206   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871248   96006 main.go:141] libmachine: (ha-467706-m03)   </features>
	I0417 18:51:34.871271   96006 main.go:141] libmachine: (ha-467706-m03)   <cpu mode='host-passthrough'>
	I0417 18:51:34.871280   96006 main.go:141] libmachine: (ha-467706-m03)   
	I0417 18:51:34.871291   96006 main.go:141] libmachine: (ha-467706-m03)   </cpu>
	I0417 18:51:34.871315   96006 main.go:141] libmachine: (ha-467706-m03)   <os>
	I0417 18:51:34.871336   96006 main.go:141] libmachine: (ha-467706-m03)     <type>hvm</type>
	I0417 18:51:34.871348   96006 main.go:141] libmachine: (ha-467706-m03)     <boot dev='cdrom'/>
	I0417 18:51:34.871358   96006 main.go:141] libmachine: (ha-467706-m03)     <boot dev='hd'/>
	I0417 18:51:34.871366   96006 main.go:141] libmachine: (ha-467706-m03)     <bootmenu enable='no'/>
	I0417 18:51:34.871375   96006 main.go:141] libmachine: (ha-467706-m03)   </os>
	I0417 18:51:34.871383   96006 main.go:141] libmachine: (ha-467706-m03)   <devices>
	I0417 18:51:34.871395   96006 main.go:141] libmachine: (ha-467706-m03)     <disk type='file' device='cdrom'>
	I0417 18:51:34.871411   96006 main.go:141] libmachine: (ha-467706-m03)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/boot2docker.iso'/>
	I0417 18:51:34.871424   96006 main.go:141] libmachine: (ha-467706-m03)       <target dev='hdc' bus='scsi'/>
	I0417 18:51:34.871435   96006 main.go:141] libmachine: (ha-467706-m03)       <readonly/>
	I0417 18:51:34.871442   96006 main.go:141] libmachine: (ha-467706-m03)     </disk>
	I0417 18:51:34.871452   96006 main.go:141] libmachine: (ha-467706-m03)     <disk type='file' device='disk'>
	I0417 18:51:34.871465   96006 main.go:141] libmachine: (ha-467706-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:51:34.871480   96006 main.go:141] libmachine: (ha-467706-m03)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/ha-467706-m03.rawdisk'/>
	I0417 18:51:34.871492   96006 main.go:141] libmachine: (ha-467706-m03)       <target dev='hda' bus='virtio'/>
	I0417 18:51:34.871500   96006 main.go:141] libmachine: (ha-467706-m03)     </disk>
	I0417 18:51:34.871509   96006 main.go:141] libmachine: (ha-467706-m03)     <interface type='network'>
	I0417 18:51:34.871517   96006 main.go:141] libmachine: (ha-467706-m03)       <source network='mk-ha-467706'/>
	I0417 18:51:34.871525   96006 main.go:141] libmachine: (ha-467706-m03)       <model type='virtio'/>
	I0417 18:51:34.871535   96006 main.go:141] libmachine: (ha-467706-m03)     </interface>
	I0417 18:51:34.871545   96006 main.go:141] libmachine: (ha-467706-m03)     <interface type='network'>
	I0417 18:51:34.871555   96006 main.go:141] libmachine: (ha-467706-m03)       <source network='default'/>
	I0417 18:51:34.871563   96006 main.go:141] libmachine: (ha-467706-m03)       <model type='virtio'/>
	I0417 18:51:34.871572   96006 main.go:141] libmachine: (ha-467706-m03)     </interface>
	I0417 18:51:34.871579   96006 main.go:141] libmachine: (ha-467706-m03)     <serial type='pty'>
	I0417 18:51:34.871585   96006 main.go:141] libmachine: (ha-467706-m03)       <target port='0'/>
	I0417 18:51:34.871615   96006 main.go:141] libmachine: (ha-467706-m03)     </serial>
	I0417 18:51:34.871633   96006 main.go:141] libmachine: (ha-467706-m03)     <console type='pty'>
	I0417 18:51:34.871648   96006 main.go:141] libmachine: (ha-467706-m03)       <target type='serial' port='0'/>
	I0417 18:51:34.871656   96006 main.go:141] libmachine: (ha-467706-m03)     </console>
	I0417 18:51:34.871666   96006 main.go:141] libmachine: (ha-467706-m03)     <rng model='virtio'>
	I0417 18:51:34.871693   96006 main.go:141] libmachine: (ha-467706-m03)       <backend model='random'>/dev/random</backend>
	I0417 18:51:34.871705   96006 main.go:141] libmachine: (ha-467706-m03)     </rng>
	I0417 18:51:34.871715   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871723   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871733   96006 main.go:141] libmachine: (ha-467706-m03)   </devices>
	I0417 18:51:34.871745   96006 main.go:141] libmachine: (ha-467706-m03) </domain>
	I0417 18:51:34.871751   96006 main.go:141] libmachine: (ha-467706-m03) 
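
The block above is the libvirt domain XML the kvm2 driver generates for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-467706 network plus the default network). Assuming the libvirt Go bindings (libvirt.org/go/libvirt), defining and starting such a domain looks roughly like the sketch below; xmlCfg is only a placeholder for the full XML printed above:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// xmlCfg stands in for the full <domain> definition logged above.
const xmlCfg = `<domain type='kvm'><name>ha-467706-m03</name>...</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it (roughly the
	// equivalent of "virsh define" followed by "virsh start").
	dom, err := conn.DomainDefineXML(xmlCfg)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain ha-467706-m03 defined and started")
}
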
	I0417 18:51:34.878872   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:20:46:22 in network default
	I0417 18:51:34.879554   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring networks are active...
	I0417 18:51:34.879573   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:34.880392   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring network default is active
	I0417 18:51:34.880661   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring network mk-ha-467706 is active
	I0417 18:51:34.881007   96006 main.go:141] libmachine: (ha-467706-m03) Getting domain xml...
	I0417 18:51:34.881716   96006 main.go:141] libmachine: (ha-467706-m03) Creating domain...
	I0417 18:51:36.106921   96006 main.go:141] libmachine: (ha-467706-m03) Waiting to get IP...
	I0417 18:51:36.107774   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.108244   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.108290   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.108235   96690 retry.go:31] will retry after 259.688955ms: waiting for machine to come up
	I0417 18:51:36.369919   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.370449   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.370484   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.370393   96690 retry.go:31] will retry after 263.833952ms: waiting for machine to come up
	I0417 18:51:36.636049   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.636520   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.636546   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.636478   96690 retry.go:31] will retry after 477.100713ms: waiting for machine to come up
	I0417 18:51:37.115192   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:37.115714   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:37.115748   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:37.115659   96690 retry.go:31] will retry after 585.751769ms: waiting for machine to come up
	I0417 18:51:37.703494   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:37.704022   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:37.704046   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:37.703971   96690 retry.go:31] will retry after 480.911798ms: waiting for machine to come up
	I0417 18:51:38.186810   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:38.187304   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:38.187336   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:38.187235   96690 retry.go:31] will retry after 741.971724ms: waiting for machine to come up
	I0417 18:51:38.931059   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:38.931460   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:38.931485   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:38.931420   96690 retry.go:31] will retry after 818.006613ms: waiting for machine to come up
	I0417 18:51:39.751433   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:39.751984   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:39.752015   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:39.751936   96690 retry.go:31] will retry after 1.076985012s: waiting for machine to come up
	I0417 18:51:40.830953   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:40.831445   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:40.831485   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:40.831385   96690 retry.go:31] will retry after 1.317961563s: waiting for machine to come up
	I0417 18:51:42.150497   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:42.150927   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:42.150950   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:42.150902   96690 retry.go:31] will retry after 1.665893506s: waiting for machine to come up
	I0417 18:51:43.818870   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:43.819324   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:43.819354   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:43.819268   96690 retry.go:31] will retry after 2.909952059s: waiting for machine to come up
	I0417 18:51:46.730539   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:46.731025   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:46.731049   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:46.730987   96690 retry.go:31] will retry after 3.59067388s: waiting for machine to come up
	I0417 18:51:50.322830   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:50.323323   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:50.323355   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:50.323263   96690 retry.go:31] will retry after 3.540199243s: waiting for machine to come up
	I0417 18:51:53.866714   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:53.867100   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:53.867130   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:53.867046   96690 retry.go:31] will retry after 3.58223567s: waiting for machine to come up
	I0417 18:51:57.450494   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.450992   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.451015   96006 main.go:141] libmachine: (ha-467706-m03) Found IP for machine: 192.168.39.250
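
Between 18:51:36 and 18:51:57 the driver repeatedly asks libvirt for a DHCP lease matching the VM's MAC address, sleeping a little longer (with jitter) after each miss, which produces the "will retry after ..." lines. A hypothetical sketch of that retry shape; lookupLeaseIP is an assumed stand-in, not the driver's real lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP is a stand-in for querying libvirt's DHCP leases for a MAC.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

// waitForIP retries with an increasing, slightly jittered delay, similar to
// the backoff visible in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:93:9e:a9", 5*time.Second)
	fmt.Println(ip, err)
}
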
	I0417 18:51:57.451069   96006 main.go:141] libmachine: (ha-467706-m03) Reserving static IP address...
	I0417 18:51:57.451464   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find host DHCP lease matching {name: "ha-467706-m03", mac: "52:54:00:93:9e:a9", ip: "192.168.39.250"} in network mk-ha-467706
	I0417 18:51:57.528115   96006 main.go:141] libmachine: (ha-467706-m03) Reserved static IP address: 192.168.39.250
	I0417 18:51:57.528155   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Getting to WaitForSSH function...
	I0417 18:51:57.528164   96006 main.go:141] libmachine: (ha-467706-m03) Waiting for SSH to be available...
	I0417 18:51:57.531251   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.531739   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.531764   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.531909   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using SSH client type: external
	I0417 18:51:57.531940   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa (-rw-------)
	I0417 18:51:57.531977   96006 main.go:141] libmachine: (ha-467706-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:51:57.531996   96006 main.go:141] libmachine: (ha-467706-m03) DBG | About to run SSH command:
	I0417 18:51:57.532010   96006 main.go:141] libmachine: (ha-467706-m03) DBG | exit 0
	I0417 18:51:57.660953   96006 main.go:141] libmachine: (ha-467706-m03) DBG | SSH cmd err, output: <nil>: 
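
WaitForSSH shells out to the system ssh binary with the options logged above and runs "exit 0" until the command succeeds. A minimal sketch of that external check with os/exec, reusing the user, key path, IP and a subset of the flags from the log (sshReady is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "ssh ... exit 0" succeeds against the guest.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa"
	for !sshReady("192.168.39.250", key) {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}
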
	I0417 18:51:57.661254   96006 main.go:141] libmachine: (ha-467706-m03) KVM machine creation complete!
	I0417 18:51:57.661574   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:57.662186   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:57.662357   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:57.662518   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:51:57.662533   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:51:57.663657   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:51:57.663671   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:51:57.663677   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:51:57.663683   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.665980   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.666354   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.666381   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.666482   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.666701   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.666883   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.667009   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.667140   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.667390   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.667406   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:51:57.776433   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:51:57.776460   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:51:57.776470   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.779557   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.780066   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.780108   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.780214   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.780447   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.780635   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.780844   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.781095   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.781318   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.781334   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:51:57.890734   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:51:57.890841   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:51:57.890855   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:51:57.890869   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:57.891166   96006 buildroot.go:166] provisioning hostname "ha-467706-m03"
	I0417 18:51:57.891206   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:57.891442   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.894112   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.894576   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.894606   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.894749   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.894916   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.895074   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.895260   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.895416   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.895592   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.895604   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706-m03 && echo "ha-467706-m03" | sudo tee /etc/hostname
	I0417 18:51:58.022957   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706-m03
	
	I0417 18:51:58.022997   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.026141   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.026552   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.026585   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.026799   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.027009   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.027194   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.027452   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.027784   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.028015   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.028040   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:51:58.147117   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
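The hostname provisioning just above is a rendered shell snippet run over SSH: rewrite the `127.0.1.1` entry if one exists, otherwise append one. A hedged sketch of assembling that snippet (the helper name is illustrative, not minikube's):

```go
package main

import "fmt"

// hostsPatch returns the shell snippet that rewrites (or appends) the
// 127.0.1.1 entry for the new hostname, matching the command in the log.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
}

func main() {
	fmt.Println(hostsPatch("ha-467706-m03"))
}
```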
	I0417 18:51:58.147158   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:51:58.147176   96006 buildroot.go:174] setting up certificates
	I0417 18:51:58.147187   96006 provision.go:84] configureAuth start
	I0417 18:51:58.147197   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:58.147514   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:58.150495   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.150863   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.150904   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.151108   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.153368   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.153737   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.153757   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.153954   96006 provision.go:143] copyHostCerts
	I0417 18:51:58.153999   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:51:58.154045   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:51:58.154059   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:51:58.154141   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:51:58.154273   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:51:58.154307   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:51:58.154318   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:51:58.154358   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:51:58.154424   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:51:58.154448   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:51:58.154457   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:51:58.154489   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:51:58.154574   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706-m03 san=[127.0.0.1 192.168.39.250 ha-467706-m03 localhost minikube]
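The line above shows the server certificate being generated with a SAN list covering the node's IP, hostname, localhost and loopback. A minimal sketch of issuing a cert with those SANs using crypto/x509; this is a self-signed stand-in (the real flow signs with the minikube CA key), so treat it as an assumption-laden illustration:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs taken from the log line above; self-signed here for brevity.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-467706-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-467706-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```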
	I0417 18:51:58.356725   96006 provision.go:177] copyRemoteCerts
	I0417 18:51:58.356820   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:51:58.356856   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.359545   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.359943   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.359981   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.360159   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.360359   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.360546   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.360688   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:58.443213   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:51:58.443311   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:51:58.471921   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:51:58.472006   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 18:51:58.498008   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:51:58.498081   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:51:58.524744   96006 provision.go:87] duration metric: took 377.543951ms to configureAuth
	I0417 18:51:58.524790   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:51:58.525095   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:58.525185   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.527959   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.528315   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.528338   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.528543   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.528758   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.528953   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.529129   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.529297   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.529463   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.529477   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:51:58.811996   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
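A note on the `%!s(MISSING)` in the command above (and in the later `date +%!s(MISSING).%!N(MISSING)` line): this is Go's fmt package flagging a verb that was formatted without a matching argument, so the literal `%s`/`%N` gets mangled only in the log line, not in the command that actually ran on the guest; the correct `CRIO_MINIKUBE_OPTIONS` output above and the valid timestamp later confirm the guest received the intended command. A two-line reproduction of the artifact:

```go
package main

import "fmt"

func main() {
	// Formatting a string that contains %s/%N with no arguments reproduces
	// the artifact seen in the log (go vet would flag this, but it compiles).
	fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
}
```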
	I0417 18:51:58.812039   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:51:58.812051   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetURL
	I0417 18:51:58.813614   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using libvirt version 6000000
	I0417 18:51:58.815988   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.816312   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.816337   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.816495   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:51:58.816517   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:51:58.816526   96006 client.go:171] duration metric: took 24.357851209s to LocalClient.Create
	I0417 18:51:58.816556   96006 start.go:167] duration metric: took 24.357933069s to libmachine.API.Create "ha-467706"
	I0417 18:51:58.816567   96006 start.go:293] postStartSetup for "ha-467706-m03" (driver="kvm2")
	I0417 18:51:58.816579   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:51:58.816597   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:58.816866   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:51:58.816890   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.819416   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.819788   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.819817   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.819908   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.820089   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.820264   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.820455   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:58.903423   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:51:58.907793   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:51:58.907829   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:51:58.907891   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:51:58.907963   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:51:58.907973   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:51:58.908078   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:51:58.917857   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:51:58.944503   96006 start.go:296] duration metric: took 127.921184ms for postStartSetup
	I0417 18:51:58.944571   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:58.945214   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:58.947772   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.948095   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.948138   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.948469   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:58.948684   96006 start.go:128] duration metric: took 24.509091391s to createHost
	I0417 18:51:58.948711   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.951031   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.951386   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.951416   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.951598   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.951807   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.951995   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.952112   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.952234   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.952417   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.952429   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:51:59.061669   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379919.025305300
	
	I0417 18:51:59.061695   96006 fix.go:216] guest clock: 1713379919.025305300
	I0417 18:51:59.061704   96006 fix.go:229] Guest: 2024-04-17 18:51:59.0253053 +0000 UTC Remote: 2024-04-17 18:51:58.948697509 +0000 UTC m=+156.376044537 (delta=76.607791ms)
	I0417 18:51:59.061723   96006 fix.go:200] guest clock delta is within tolerance: 76.607791ms
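The fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host-side timestamp, accepting the machine when the drift is small. A rough sketch of that comparison; the one-second tolerance here is an assumption for illustration, not minikube's configured value:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the `date +%s.%N` output from the guest and
// returns how far it drifts from the given host reference time.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	host := time.Date(2024, 4, 17, 18, 51, 58, 948697509, time.UTC)
	d, _ := guestClockDelta("1713379919.025305300", host)
	fmt.Println(d, "within tolerance:", d < time.Second)
}
```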
	I0417 18:51:59.061730   96006 start.go:83] releasing machines lock for "ha-467706-m03", held for 24.622249744s
	I0417 18:51:59.061754   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.062041   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:59.064824   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.065192   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.065231   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.067675   96006 out.go:177] * Found network options:
	I0417 18:51:59.069081   96006 out.go:177]   - NO_PROXY=192.168.39.159,192.168.39.236
	W0417 18:51:59.070321   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	W0417 18:51:59.070343   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:51:59.070360   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071000   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071241   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071364   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:51:59.071410   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	W0417 18:51:59.071447   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	W0417 18:51:59.071470   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:51:59.071539   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:51:59.071564   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:59.075438   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.075906   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076399   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.076426   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076632   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.076660   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:59.076671   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076837   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:59.076882   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:59.076982   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:59.077030   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:59.077184   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:59.077180   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:59.077333   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:59.312868   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:51:59.319779   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:51:59.319848   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:51:59.337213   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:51:59.337240   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:51:59.337303   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:51:59.356790   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:51:59.371164   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:51:59.371221   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:51:59.385299   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:51:59.399551   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:51:59.511492   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:51:59.680128   96006 docker.go:233] disabling docker service ...
	I0417 18:51:59.680200   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:51:59.695980   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:51:59.709911   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:51:59.853481   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:51:59.975362   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:51:59.990715   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:52:00.014556   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:52:00.014646   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.026160   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:52:00.026224   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.037683   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.049269   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.060422   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:52:00.072142   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.083153   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.103537   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
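The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A hedged sketch of the same rewrites done in Go instead of sed (file path from the log; the regexes are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies edits equivalent to the sed commands in the log
// and returns the updated file contents.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	fmt.Println(rewriteCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
}
```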
	I0417 18:52:00.115363   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:52:00.125483   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:52:00.125543   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:52:00.140258   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
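The failed sysctl above is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IP forwarding. A small sketch of that check-then-load pattern (paths as seen in the log; needs root to actually write):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key appears only after br_netfilter is loaded, so probe the
	// /proc path first and load the module on failure, as the log does.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v (%s)\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
```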
	I0417 18:52:00.151232   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:00.284671   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 18:52:00.434033   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:52:00.434122   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:52:00.439963   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:52:00.440079   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:52:00.444073   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:52:00.487475   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:52:00.487559   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:52:00.516105   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:52:00.546634   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:52:00.548096   96006 out.go:177]   - env NO_PROXY=192.168.39.159
	I0417 18:52:00.549484   96006 out.go:177]   - env NO_PROXY=192.168.39.159,192.168.39.236
	I0417 18:52:00.550999   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:52:00.553930   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:52:00.554200   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:52:00.554219   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:52:00.554407   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:52:00.559384   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:52:00.573146   96006 mustload.go:65] Loading cluster: ha-467706
	I0417 18:52:00.573371   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:52:00.573651   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:00.573700   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:00.588993   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0417 18:52:00.589529   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:00.590035   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:00.590058   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:00.590436   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:00.590616   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:52:00.592317   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:52:00.592729   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:00.592804   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:00.607728   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0417 18:52:00.608165   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:00.608719   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:00.608750   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:00.609117   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:00.609294   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:52:00.609463   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.250
	I0417 18:52:00.609482   96006 certs.go:194] generating shared ca certs ...
	I0417 18:52:00.609497   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.609655   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:52:00.609709   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:52:00.609724   96006 certs.go:256] generating profile certs ...
	I0417 18:52:00.609820   96006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:52:00.609850   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9
	I0417 18:52:00.609869   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.250 192.168.39.254]
	I0417 18:52:00.749277   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 ...
	I0417 18:52:00.749320   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9: {Name:mk6143d78cf42a990aa606d474f97e8b4fd0619a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.749616   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9 ...
	I0417 18:52:00.749648   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9: {Name:mk3bb67b98f87c1d920beec47452d123a46411b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.749762   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:52:00.749897   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:52:00.750025   96006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 18:52:00.750042   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:52:00.750054   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:52:00.750068   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:52:00.750081   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:52:00.750098   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:52:00.750110   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:52:00.750122   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:52:00.750134   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:52:00.750185   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:52:00.750214   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:52:00.750223   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:52:00.750245   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:52:00.750267   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:52:00.750290   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:52:00.750324   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:52:00.750348   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:00.750362   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:52:00.750374   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:52:00.750409   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:52:00.753425   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:00.753882   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:52:00.753914   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:00.754091   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:52:00.754308   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:52:00.754433   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:52:00.754565   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:52:00.825222   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0417 18:52:00.830594   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0417 18:52:00.843572   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0417 18:52:00.848432   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0417 18:52:00.860990   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0417 18:52:00.868349   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0417 18:52:00.881489   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0417 18:52:00.886462   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0417 18:52:00.902096   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0417 18:52:00.907262   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0417 18:52:00.920906   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0417 18:52:00.926468   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0417 18:52:00.941097   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:52:00.969343   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:52:00.997366   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:52:01.024743   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:52:01.054703   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0417 18:52:01.082003   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 18:52:01.109194   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:52:01.136303   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:52:01.162022   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:52:01.187722   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:52:01.214107   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:52:01.241348   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0417 18:52:01.261810   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0417 18:52:01.282278   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0417 18:52:01.301494   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0417 18:52:01.320460   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0417 18:52:01.339415   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0417 18:52:01.357976   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0417 18:52:01.375690   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:52:01.381609   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:52:01.392798   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.397733   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.397804   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.403864   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 18:52:01.415745   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:52:01.428109   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.432877   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.432941   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.439267   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:52:01.450923   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:52:01.462324   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.467307   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.467382   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.473331   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
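The openssl/ln pairs above install each CA into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0), which is how OpenSSL's hashed certificate directory lookup finds it. A sketch that reproduces that linking step by shelling out to openssl (paths assumed from the log; not minikube's helper):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under "<subject-hash>.0",
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```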
	I0417 18:52:01.484248   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:52:01.488720   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:52:01.488810   96006 kubeadm.go:928] updating node {m03 192.168.39.250 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:52:01.488925   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
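The kubelet drop-in above is rendered per node: ExecStart is cleared and re-set with the node's hostname override and IP. A hedged text/template sketch of rendering that fragment, with the template text transcribed from the log rather than taken from minikube's actual template file:

```go
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the ha-467706-m03 entry in the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.0-rc.2",
		"NodeName":          "ha-467706-m03",
		"NodeIP":            "192.168.39.250",
	})
}
```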
	I0417 18:52:01.488961   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:52:01.489005   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:52:01.505434   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:52:01.505511   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
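Once rendered, the kube-vip static pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1346-byte scp a few lines below), where kubelet picks it up as a static pod so the new control-plane node can join the 192.168.39.254 VIP. A sketch of just the "write the manifest where kubelet watches" step; the manifest string is abbreviated, the full content is in the log above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	manifest := "apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n# ... (full config as rendered in the log)\n"
	dir := "/etc/kubernetes/manifests"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	// kubelet's static-pod path: dropping a file here is enough for the
	// kubelet to schedule the pod on this node.
	if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), []byte(manifest), 0o644); err != nil {
		fmt.Println(err)
	}
}
```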
	I0417 18:52:01.505571   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:52:01.517039   96006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0-rc.2': No such file or directory
	
	Initiating transfer...
	I0417 18:52:01.517114   96006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:52:01.528436   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm.sha256
	I0417 18:52:01.528460   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet.sha256
	I0417 18:52:01.528473   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:52:01.528483   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0417 18:52:01.528495   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:52:01.528516   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:52:01.528548   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:52:01.528549   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:52:01.533531   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl': No such file or directory
	I0417 18:52:01.533565   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl (51454104 bytes)
	I0417 18:52:01.556412   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:52:01.556557   96006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:52:01.569927   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm': No such file or directory
	I0417 18:52:01.569973   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm (50249880 bytes)
	I0417 18:52:01.615729   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet': No such file or directory
	I0417 18:52:01.615777   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet (100100024 bytes)
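Each binary is resolved from dl.k8s.io with a checksum= reference to the sibling .sha256 file, then copied into /var/lib/minikube/binaries/v1.30.0-rc.2 on the node. A minimal sketch of the download-and-verify part of that pattern follows (kubeadm only, URL and version taken from the log, local write instead of scp, abbreviated error handling).

// fetchverify.go: a minimal sketch of fetching a Kubernetes binary and checking
// it against its published .sha256 file, as the log above describes.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm"

	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sumFile))[0] // the .sha256 file holds the hex digest
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic("checksum mismatch: got " + got + " want " + want)
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm downloaded and verified")
}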
	I0417 18:52:02.505814   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0417 18:52:02.516731   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0417 18:52:02.537107   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:52:02.555786   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 18:52:02.574491   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:52:02.580118   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
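The one-liner above pins control-plane.minikube.internal to the HA VIP by rewriting /etc/hosts: any stale mapping for that hostname is filtered out and the new entry appended. The same operation sketched in Go for clarity; the VIP and hostname are taken from the log, and writing /etc/hosts would require root.

// pinhosts.go: a minimal sketch of the rewrite performed by the bash one-liner above.
// It drops any existing control-plane.minikube.internal line and appends the VIP entry.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale mapping, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}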
	I0417 18:52:02.594496   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:02.741413   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:52:02.761049   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:52:02.761373   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:02.761425   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:02.776757   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0417 18:52:02.777305   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:02.777802   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:02.777823   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:02.778163   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:02.778370   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:52:02.778550   96006 start.go:316] joinCluster: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 Cl
usterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:52:02.778748   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0417 18:52:02.778783   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:52:02.781899   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:02.782379   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:52:02.782406   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:02.782580   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:52:02.782768   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:52:02.782986   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:52:02.783174   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:52:02.957773   96006 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:52:02.957846   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1y032b.zfoaqppbodvod22o --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0417 18:52:27.749596   96006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1y032b.zfoaqppbodvod22o --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (24.791722831s)
	I0417 18:52:27.749646   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0417 18:52:28.385191   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706-m03 minikube.k8s.io/updated_at=2024_04_17T18_52_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=false
	I0417 18:52:28.504077   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-467706-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0417 18:52:28.621348   96006 start.go:318] duration metric: took 25.842793494s to joinCluster
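The join itself is two commands: a token plus CA cert hash minted on the primary with "kubeadm token create --print-join-command --ttl=0", then "kubeadm join ... --control-plane --apiserver-advertise-address=<node IP>" executed on m03, followed by the label and taint adjustments above. A rough sketch of composing and running such a join command is below; the token and hash are placeholders, and minikube runs the equivalent command over SSH on the new node rather than locally.

// joinsketch.go: a rough sketch of the control-plane join shown above. Token and
// CA cert hash are placeholders taken from --print-join-command output; the
// advertise address is the joining node's IP from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	token := "REPLACE.ME"                // placeholder
	caHash := "sha256:REPLACE_WITH_HASH" // placeholder
	advertise := "192.168.39.250"        // joining node's IP, from the log

	join := fmt.Sprintf(
		"kubeadm join control-plane.minikube.internal:8443 --token %s "+
			"--discovery-token-ca-cert-hash %s --ignore-preflight-errors=all "+
			"--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		token, caHash, advertise)

	out, err := exec.Command("sudo", "bash", "-c", join).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("join failed:", err)
	}
}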
	I0417 18:52:28.621444   96006 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:52:28.623236   96006 out.go:177] * Verifying Kubernetes components...
	I0417 18:52:28.621761   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:52:28.624642   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:28.898574   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:52:28.923168   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:52:28.923555   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0417 18:52:28.923651   96006 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.159:8443
	I0417 18:52:28.923966   96006 node_ready.go:35] waiting up to 6m0s for node "ha-467706-m03" to be "Ready" ...
	I0417 18:52:28.924090   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:28.924104   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:28.924115   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:28.924120   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:28.927651   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:29.424873   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:29.424902   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:29.424914   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:29.424921   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:29.428630   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:29.924950   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:29.924978   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:29.924989   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:29.924995   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:29.929352   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:30.424980   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:30.425005   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:30.425016   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:30.425021   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:30.433101   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:52:30.924634   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:30.924655   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:30.924663   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:30.924666   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:30.929216   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:30.930081   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:31.424251   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:31.424276   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:31.424287   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:31.424294   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:31.427884   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:31.925119   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:31.925141   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:31.925150   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:31.925153   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:31.929439   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:32.425059   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:32.425082   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:32.425090   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:32.425095   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:32.428735   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:32.925083   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:32.925105   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:32.925113   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:32.925117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:32.928326   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:33.424762   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:33.424797   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:33.424806   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:33.424810   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:33.428416   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:33.429153   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:33.924533   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:33.924556   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:33.924565   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:33.924569   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:33.928140   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:34.424439   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:34.424464   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:34.424472   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:34.424476   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:34.428193   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:34.925087   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:34.925111   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:34.925120   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:34.925125   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:34.928677   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:35.425172   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:35.425200   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:35.425210   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:35.425218   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:35.429160   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:35.429880   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:35.925045   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:35.925069   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:35.925081   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:35.925086   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:35.929103   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.424535   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:36.424572   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.424584   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.424592   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.428304   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.925072   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:36.925098   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.925107   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.925112   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.929080   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.929865   96006 node_ready.go:49] node "ha-467706-m03" has status "Ready":"True"
	I0417 18:52:36.929899   96006 node_ready.go:38] duration metric: took 8.005907796s for node "ha-467706-m03" to be "Ready" ...
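The eight seconds of polling above is the node-readiness wait: GET /api/v1/nodes/ha-467706-m03 roughly every 500ms until the Ready condition reports True. The equivalent check written directly against client-go looks roughly like the sketch below; the kubeconfig path and node name are taken from the log.

// nodeready.go: a minimal client-go sketch of the readiness poll shown above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's wait
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-467706-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}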
	I0417 18:52:36.929921   96006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:52:36.930010   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:36.930023   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.930035   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.930040   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.936875   96006 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
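The follow-up wait fetches the kube-system pods and then checks each of the label groups listed above (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for a True PodReady condition. A compact client-go sketch of that per-label readiness sweep follows, using the same assumed kubeconfig path as the previous sketch.

// syspods.go: a minimal sketch that lists kube-system pods per label selector
// from the log and reports any that are not yet Ready.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	labels := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}

	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for _, sel := range labels {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if !podReady(p) {
				fmt.Printf("%s (%s) not Ready yet\n", p.Name, sel)
			}
		}
	}
}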
	I0417 18:52:36.943562   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.943658   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-56dz8
	I0417 18:52:36.943671   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.943682   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.943691   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.946981   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.947848   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.947865   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.947874   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.947877   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.951124   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.951882   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.951909   96006 pod_ready.go:81] duration metric: took 8.318595ms for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.951922   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.952004   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kcdqn
	I0417 18:52:36.952016   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.952025   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.952030   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.954946   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.956375   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.956391   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.956398   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.956402   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.960496   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:36.961176   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.961195   96006 pod_ready.go:81] duration metric: took 9.26631ms for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.961204   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.961424   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706
	I0417 18:52:36.961444   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.961453   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.961460   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.964198   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.964962   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.964980   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.964990   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.964996   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.968568   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.969736   96006 pod_ready.go:92] pod "etcd-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.969755   96006 pod_ready.go:81] duration metric: took 8.543441ms for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.969767   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.969836   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:52:36.969846   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.969856   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.969864   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.973322   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.974022   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:36.974039   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.974049   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.974056   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.977049   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.977876   96006 pod_ready.go:92] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.977893   96006 pod_ready.go:81] duration metric: took 8.118952ms for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.977902   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:37.125207   96006 request.go:629] Waited for 147.237578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.125304   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.125317   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.125327   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.125335   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.129265   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
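The "Waited ... due to client-side throttling" messages come from client-go's token-bucket rate limiter, not from API priority and fairness on the server. The rest.Config dumped earlier shows QPS:0, Burst:0, so the client falls back to client-go's defaults (historically 5 requests/s with a burst of 10), and tight polling loops like this one get spaced out. Raising the limits on the config removes those waits; a minimal sketch, reusing the same assumed kubeconfig path:

// qps.go: a minimal sketch of raising the client-side rate limits on a rest.Config.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Leaving these at zero means client-go applies its defaults, which is what
	// produces the throttling waits seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Printf("client built with QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}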
	I0417 18:52:37.325969   96006 request.go:629] Waited for 195.91734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.326062   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.326069   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.326081   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.326087   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.329735   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:37.525979   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.526008   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.526024   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.526030   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.530532   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:37.725519   96006 request.go:629] Waited for 194.121985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.725593   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.725601   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.725612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.725631   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.728991   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:37.978697   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.978723   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.978732   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.978736   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.982744   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.125959   96006 request.go:629] Waited for 142.267335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.126016   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.126020   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.126028   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.126033   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.129065   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.478351   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:38.478382   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.478393   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.478400   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.482019   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.526015   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.526040   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.526053   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.526057   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.530125   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:38.979130   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:38.979167   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.979181   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.979191   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.983274   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:38.984058   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.984074   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.984082   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.984087   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.987121   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.987824   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:39.478744   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:39.478767   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.478776   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.478780   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.482249   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:39.482930   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:39.482947   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.482955   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.482958   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.485711   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:39.978405   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:39.978428   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.978437   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.978441   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.983307   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:39.984232   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:39.984251   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.984260   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.984265   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.987576   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.478611   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:40.478635   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.478649   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.478654   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.482185   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.482957   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:40.482974   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.482982   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.482990   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.485806   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:40.978434   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:40.978458   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.978466   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.978470   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.982366   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.983312   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:40.983326   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.983334   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.983337   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.986184   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:41.478160   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:41.478186   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.478195   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.478205   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.482826   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:41.483571   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:41.483587   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.483597   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.483606   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.486864   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:41.487510   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:41.978993   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:41.979015   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.979031   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.979035   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.982762   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:41.983777   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:41.983799   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.983811   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.983817   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.987561   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.478573   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:42.478599   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.478609   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.478623   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.482464   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.483352   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:42.483371   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.483379   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.483384   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.486473   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.978778   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:42.978800   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.978808   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.978812   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.982956   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:42.984208   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:42.984229   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.984240   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.984245   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.987545   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:43.479123   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:43.479160   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.479173   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.479178   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.482599   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:43.483500   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:43.483519   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.483529   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.483538   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.488219   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:43.488884   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:43.978220   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:43.978245   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.978254   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.978258   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.982430   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:43.983082   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:43.983101   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.983111   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.983117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.986478   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.478768   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:44.478792   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.478800   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.478805   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.482390   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.482993   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:44.483009   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.483017   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.483022   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.486288   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.486885   96006 pod_ready.go:92] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.486907   96006 pod_ready.go:81] duration metric: took 7.508997494s for pod "etcd-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.486931   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.487003   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706
	I0417 18:52:44.487014   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.487024   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.487033   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.489839   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.490630   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.490648   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.490659   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.490666   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.493317   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.493787   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.493806   96006 pod_ready.go:81] duration metric: took 6.868213ms for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.493815   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.493866   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:52:44.493875   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.493881   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.493885   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.496618   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.497206   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:44.497221   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.497228   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.497232   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.499850   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.500328   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.500346   96006 pod_ready.go:81] duration metric: took 6.524042ms for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.500354   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.500398   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m03
	I0417 18:52:44.500406   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.500413   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.500416   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.502820   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.525635   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:44.525658   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.525668   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.525674   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.529459   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.529911   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.529931   96006 pod_ready.go:81] duration metric: took 29.571504ms for pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.529941   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.725298   96006 request.go:629] Waited for 195.288314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:52:44.725379   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:52:44.725389   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.725400   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.725411   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.729804   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:44.925812   96006 request.go:629] Waited for 195.39534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.925880   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.925888   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.925896   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.925913   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.929676   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.930553   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.930576   96006 pod_ready.go:81] duration metric: took 400.628962ms for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.930586   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.125857   96006 request.go:629] Waited for 195.191389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:52:45.125925   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:52:45.125932   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.125942   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.125948   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.129306   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.325424   96006 request.go:629] Waited for 195.384898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:45.325501   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:45.325507   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.325517   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.325525   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.329384   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.329913   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:45.329940   96006 pod_ready.go:81] duration metric: took 399.345281ms for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.329956   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.526037   96006 request.go:629] Waited for 195.983642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m03
	I0417 18:52:45.526113   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m03
	I0417 18:52:45.526118   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.526125   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.526129   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.529953   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.725683   96006 request.go:629] Waited for 195.070685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:45.725758   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:45.725766   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.725782   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.725790   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.729681   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.730371   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:45.730395   96006 pod_ready.go:81] duration metric: took 400.429466ms for pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.730409   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.925377   96006 request.go:629] Waited for 194.888012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:52:45.925484   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:52:45.925490   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.925498   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.925505   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.929973   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:46.125654   96006 request.go:629] Waited for 194.431513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:46.125734   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:46.125743   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.125755   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.125762   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.129903   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:46.130743   96006 pod_ready.go:92] pod "kube-proxy-hd469" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.130767   96006 pod_ready.go:81] duration metric: took 400.350111ms for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.130779   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jlcq7" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.325821   96006 request.go:629] Waited for 194.963898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jlcq7
	I0417 18:52:46.325910   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jlcq7
	I0417 18:52:46.325921   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.325931   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.325940   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.329752   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.526022   96006 request.go:629] Waited for 195.476053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:46.526102   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:46.526108   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.526116   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.526121   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.529860   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.530884   96006 pod_ready.go:92] pod "kube-proxy-jlcq7" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.530902   96006 pod_ready.go:81] duration metric: took 400.117162ms for pod "kube-proxy-jlcq7" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.530913   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.726192   96006 request.go:629] Waited for 195.191733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:52:46.726277   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:52:46.726285   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.726295   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.726299   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.729763   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.925722   96006 request.go:629] Waited for 195.405879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:46.925783   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:46.925788   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.925795   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.925803   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.929239   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.930056   96006 pod_ready.go:92] pod "kube-proxy-qxtf4" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.930085   96006 pod_ready.go:81] duration metric: took 399.165332ms for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.930101   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.126140   96006 request.go:629] Waited for 195.938622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:52:47.126218   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:52:47.126224   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.126232   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.126237   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.129831   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.325630   96006 request.go:629] Waited for 195.196456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:47.325703   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:47.325716   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.325735   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.325746   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.329278   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.330132   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:47.330153   96006 pod_ready.go:81] duration metric: took 400.03338ms for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.330165   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.525201   96006 request.go:629] Waited for 194.96066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:52:47.525294   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:52:47.525304   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.525312   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.525316   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.528656   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.725569   96006 request.go:629] Waited for 196.155088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:47.725626   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:47.725631   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.725639   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.725643   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.730074   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:47.730804   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:47.730829   96006 pod_ready.go:81] duration metric: took 400.655349ms for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.730843   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.925724   96006 request.go:629] Waited for 194.787766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m03
	I0417 18:52:47.925810   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m03
	I0417 18:52:47.925822   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.925829   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.925834   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.929948   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:48.125180   96006 request.go:629] Waited for 194.303544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:48.125265   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:48.125271   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.125280   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.125285   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.128743   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:48.129451   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:48.129480   96006 pod_ready.go:81] duration metric: took 398.623814ms for pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:48.129495   96006 pod_ready.go:38] duration metric: took 11.199554563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
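	The readiness loop above repeatedly GETs each system pod and its node until the pod reports the Ready condition; that condition test is what the pod_ready.go timings measure. As a rough illustration only (a minimal client-go sketch, not minikube's own pod_ready.go; the kubeconfig path and pod name are illustrative stand-ins), the same check looks like:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod's Ready condition is True; this is the
// condition the pod_ready.go waits above are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are illustrative stand-ins.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-467706", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, isPodReady(pod))
}
```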
	I0417 18:52:48.129516   96006 api_server.go:52] waiting for apiserver process to appear ...
	I0417 18:52:48.129583   96006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:52:48.146310   96006 api_server.go:72] duration metric: took 19.524820593s to wait for apiserver process to appear ...
	I0417 18:52:48.146344   96006 api_server.go:88] waiting for apiserver healthz status ...
	I0417 18:52:48.146377   96006 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0417 18:52:48.153650   96006 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0417 18:52:48.153862   96006 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I0417 18:52:48.153876   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.153888   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.153894   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.154829   96006 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0417 18:52:48.154892   96006 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 18:52:48.154905   96006 api_server.go:131] duration metric: took 8.55388ms to wait for apiserver health ...
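	The api_server.go steps above probe /healthz for a literal "ok" body and then read /version, which reports the control plane at v1.30.0-rc.2. A minimal sketch of the same probes through a client-go Discovery client, assuming a kubeconfig that points at this cluster (the path below is an assumption):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; the test run points KUBECONFIG at its workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// /healthz should return a plain "ok" body, as logged above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// /version reports the control plane version (v1.30.0-rc.2 in this run).
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion)
}
```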
	I0417 18:52:48.154914   96006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 18:52:48.325249   96006 request.go:629] Waited for 170.263186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.325330   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.325336   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.325353   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.325361   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.332685   96006 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0417 18:52:48.339789   96006 system_pods.go:59] 24 kube-system pods found
	I0417 18:52:48.339819   96006 system_pods.go:61] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:52:48.339823   96006 system_pods.go:61] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:52:48.339827   96006 system_pods.go:61] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:52:48.339830   96006 system_pods.go:61] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:52:48.339833   96006 system_pods.go:61] "etcd-ha-467706-m03" [a79f9120-9c62-465e-8e06-97337c1eecd9] Running
	I0417 18:52:48.339836   96006 system_pods.go:61] "kindnet-5mvhn" [1d1c6ddb-22cf-489e-8958-41434cbf8b0c] Running
	I0417 18:52:48.339839   96006 system_pods.go:61] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:52:48.339842   96006 system_pods.go:61] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:52:48.339844   96006 system_pods.go:61] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:52:48.339848   96006 system_pods.go:61] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:52:48.339850   96006 system_pods.go:61] "kube-apiserver-ha-467706-m03" [69a7a929-d717-4c2f-9cca-c067dcc7610d] Running
	I0417 18:52:48.339853   96006 system_pods.go:61] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:52:48.339856   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:52:48.339860   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m03" [ae6eeeac-1ab7-4b22-8691-69d534d6d73e] Running
	I0417 18:52:48.339862   96006 system_pods.go:61] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:52:48.339865   96006 system_pods.go:61] "kube-proxy-jlcq7" [05590f74-8ea6-42ef-9d72-33e15cfd3a32] Running
	I0417 18:52:48.339868   96006 system_pods.go:61] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:52:48.339870   96006 system_pods.go:61] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:52:48.339873   96006 system_pods.go:61] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:52:48.339876   96006 system_pods.go:61] "kube-scheduler-ha-467706-m03" [94c0d749-a1da-468b-baee-25f5177376e5] Running
	I0417 18:52:48.339878   96006 system_pods.go:61] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:52:48.339881   96006 system_pods.go:61] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:52:48.339884   96006 system_pods.go:61] "kube-vip-ha-467706-m03" [75d15d4b-ed49-4d98-aecd-713bead1e281] Running
	I0417 18:52:48.339887   96006 system_pods.go:61] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:52:48.339894   96006 system_pods.go:74] duration metric: took 184.973208ms to wait for pod list to return data ...
	I0417 18:52:48.339904   96006 default_sa.go:34] waiting for default service account to be created ...
	I0417 18:52:48.525253   96006 request.go:629] Waited for 185.260041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:52:48.525328   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:52:48.525334   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.525343   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.525347   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.529005   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:48.529165   96006 default_sa.go:45] found service account: "default"
	I0417 18:52:48.529185   96006 default_sa.go:55] duration metric: took 189.273406ms for default service account to be created ...
	I0417 18:52:48.529198   96006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 18:52:48.725658   96006 request.go:629] Waited for 196.383891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.725738   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.725744   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.725752   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.725757   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.732978   96006 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0417 18:52:48.738859   96006 system_pods.go:86] 24 kube-system pods found
	I0417 18:52:48.738891   96006 system_pods.go:89] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:52:48.738897   96006 system_pods.go:89] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:52:48.738902   96006 system_pods.go:89] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:52:48.738906   96006 system_pods.go:89] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:52:48.738910   96006 system_pods.go:89] "etcd-ha-467706-m03" [a79f9120-9c62-465e-8e06-97337c1eecd9] Running
	I0417 18:52:48.738914   96006 system_pods.go:89] "kindnet-5mvhn" [1d1c6ddb-22cf-489e-8958-41434cbf8b0c] Running
	I0417 18:52:48.738918   96006 system_pods.go:89] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:52:48.738922   96006 system_pods.go:89] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:52:48.738927   96006 system_pods.go:89] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:52:48.738931   96006 system_pods.go:89] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:52:48.738935   96006 system_pods.go:89] "kube-apiserver-ha-467706-m03" [69a7a929-d717-4c2f-9cca-c067dcc7610d] Running
	I0417 18:52:48.738939   96006 system_pods.go:89] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:52:48.738943   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:52:48.738948   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m03" [ae6eeeac-1ab7-4b22-8691-69d534d6d73e] Running
	I0417 18:52:48.738954   96006 system_pods.go:89] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:52:48.738958   96006 system_pods.go:89] "kube-proxy-jlcq7" [05590f74-8ea6-42ef-9d72-33e15cfd3a32] Running
	I0417 18:52:48.738965   96006 system_pods.go:89] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:52:48.738969   96006 system_pods.go:89] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:52:48.738975   96006 system_pods.go:89] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:52:48.738980   96006 system_pods.go:89] "kube-scheduler-ha-467706-m03" [94c0d749-a1da-468b-baee-25f5177376e5] Running
	I0417 18:52:48.738992   96006 system_pods.go:89] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:52:48.738995   96006 system_pods.go:89] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:52:48.738999   96006 system_pods.go:89] "kube-vip-ha-467706-m03" [75d15d4b-ed49-4d98-aecd-713bead1e281] Running
	I0417 18:52:48.739002   96006 system_pods.go:89] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:52:48.739011   96006 system_pods.go:126] duration metric: took 209.807818ms to wait for k8s-apps to be running ...
	I0417 18:52:48.739021   96006 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 18:52:48.739068   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:52:48.758130   96006 system_svc.go:56] duration metric: took 19.098207ms WaitForService to wait for kubelet
	I0417 18:52:48.758166   96006 kubeadm.go:576] duration metric: took 20.136683772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:52:48.758192   96006 node_conditions.go:102] verifying NodePressure condition ...
	I0417 18:52:48.925633   96006 request.go:629] Waited for 167.350815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I0417 18:52:48.925711   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I0417 18:52:48.925717   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.925740   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.925762   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.930050   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:48.931223   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931267   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931278   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931282   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931285   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931288   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931292   96006 node_conditions.go:105] duration metric: took 173.095338ms to run NodePressure ...
	I0417 18:52:48.931304   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:52:48.931341   96006 start.go:254] writing updated cluster config ...
	I0417 18:52:48.931624   96006 ssh_runner.go:195] Run: rm -f paused
	I0417 18:52:48.985138   96006 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 18:52:48.987244   96006 out.go:177] * Done! kubectl is now configured to use "ha-467706" cluster and "default" namespace by default
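	The recurring "Waited for ~195ms due to client-side throttling, not priority and fairness" lines throughout the readiness checks come from client-go's client-side rate limiter pacing the GETs; the spacing is consistent with the library's default limits (QPS 5, burst 10) rather than any server-side limit. A hedged sketch of lifting those limits on a rest.Config, with illustrative values (this is not what the test harness itself does):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With client-go's defaults (QPS 5, burst 10), back-to-back requests are paced
	// roughly 200 ms apart, which matches the ~195 ms waits logged above.
	// These higher values are illustrative only.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```

	Raising QPS/Burst only changes how fast this one client may send requests; it does not alter anything on the apiserver, so the waits above are pacing rather than an error.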
	
	
	==> CRI-O <==
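	The journal excerpt below is mostly debug-level CRI round-trips answered by cri-o 1.29.1: RuntimeService/Version, ImageService/ImageFsInfo, and unfiltered RuntimeService/ListContainers calls whose responses enumerate the running control-plane containers. A sketch of issuing the same Version and ListContainers RPCs directly over the CRI socket with the cri-api Go client (the socket path is CRI-O's default and an assumption here):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPCs as the debug entries below: Version and an unfiltered ListContainers.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1"

	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```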
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.673846708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380176673821265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9bb6f29-c51c-4b70-a9ae-c8e653df8622 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.674691058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0a3bbd6-77a9-461f-9b14-2ffd56539b2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.674865823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0a3bbd6-77a9-461f-9b14-2ffd56539b2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.675258580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0a3bbd6-77a9-461f-9b14-2ffd56539b2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.721348127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b68067d-9614-4d83-a3b2-5c91c65a1579 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.721425182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b68067d-9614-4d83-a3b2-5c91c65a1579 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.723580976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4794c32-f0e5-4347-b67b-aaed19e20956 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.723998012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380176723973392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4794c32-f0e5-4347-b67b-aaed19e20956 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.724569381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a078ee1-0044-4e4c-b0d5-e949db3d03fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.724653239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a078ee1-0044-4e4c-b0d5-e949db3d03fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.724989321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a078ee1-0044-4e4c-b0d5-e949db3d03fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.766160136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d09b5f8-9a4e-4037-8d7f-1510e4272294 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.766257313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d09b5f8-9a4e-4037-8d7f-1510e4272294 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.767797327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d23c1a00-ea9c-409b-ba22-fbbd64cfcc44 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.768325545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380176768298977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d23c1a00-ea9c-409b-ba22-fbbd64cfcc44 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.768860808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=511545cb-050e-4f55-b9e7-5962f1067e2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.768941691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=511545cb-050e-4f55-b9e7-5962f1067e2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.769211735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=511545cb-050e-4f55-b9e7-5962f1067e2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.811781828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ece06ace-7a11-4039-b8ef-5e43d65977eb name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.811882355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ece06ace-7a11-4039-b8ef-5e43d65977eb name=/runtime.v1.RuntimeService/Version
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.812965652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc6ebb06-28e6-44a7-a714-46cb945fdaa2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.813643861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380176813450605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc6ebb06-28e6-44a7-a714-46cb945fdaa2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.814305240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0c72d74-3592-4b10-9c0b-8f378e77f496 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.814356566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0c72d74-3592-4b10-9c0b-8f378e77f496 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:56:16 ha-467706 crio[683]: time="2024-04-17 18:56:16.814654850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0c72d74-3592-4b10-9c0b-8f378e77f496 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	93e18e5085cb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1b57101a3681c       busybox-fc5497c4f-r65s7
	23e69dba1da3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b3b3a885a73fc       storage-provisioner
	143bf06c19825       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   0836a6cd9f827       coredns-7db6d8ff4d-kcdqn
	56dd0755cda79       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   2887673f339d8       coredns-7db6d8ff4d-56dz8
	2f2ed526ef2f9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   a4a14e5274f05       kindnet-hspjv
	fe8aab67cc372       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      5 minutes ago       Running             kube-proxy                0                   269ac099b43b4       kube-proxy-hd469
	c2e7dc14e0398       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   209f90ed9f3f5       kube-vip-ha-467706
	0b4b6b19cdcea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   167b41d6ec7a7       etcd-ha-467706
	d1e96d91894cf       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      6 minutes ago       Running             kube-apiserver            0                   2d8b6f55b0eab       kube-apiserver-ha-467706
	644754e2725b2       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      6 minutes ago       Running             kube-controller-manager   0                   844fdf54706b9       kube-controller-manager-ha-467706
	7f539c70ed4df       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      6 minutes ago       Running             kube-scheduler            0                   18f84a94ee364       kube-scheduler-ha-467706
	
	
	==> coredns [143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0] <==
	[INFO] 10.244.2.2:44486 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000369814s
	[INFO] 10.244.2.2:33799 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190447s
	[INFO] 10.244.0.4:52709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115254s
	[INFO] 10.244.0.4:45280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018129s
	[INFO] 10.244.0.4:55894 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001481686s
	[INFO] 10.244.0.4:41971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086394s
	[INFO] 10.244.1.2:45052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204533s
	[INFO] 10.244.1.2:56976 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00191173s
	[INFO] 10.244.1.2:48269 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205618s
	[INFO] 10.244.1.2:41050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145556s
	[INFO] 10.244.1.2:40399 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367129s
	[INFO] 10.244.1.2:34908 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024876s
	[INFO] 10.244.1.2:33490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115098s
	[INFO] 10.244.1.2:43721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162828s
	[INFO] 10.244.2.2:52076 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338786s
	[INFO] 10.244.0.4:58146 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084273s
	[INFO] 10.244.0.4:46620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011163s
	[INFO] 10.244.1.2:55749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161622s
	[INFO] 10.244.1.2:50475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112723s
	[INFO] 10.244.2.2:58296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123831s
	[INFO] 10.244.2.2:42756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149112s
	[INFO] 10.244.2.2:44779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135979s
	[INFO] 10.244.0.4:32859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254227s
	[INFO] 10.244.0.4:39694 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091483s
	[INFO] 10.244.1.2:48582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162571s
	
	
	==> coredns [56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230] <==
	[INFO] 10.244.1.2:40690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216989s
	[INFO] 10.244.1.2:51761 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001998901s
	[INFO] 10.244.2.2:55936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277709s
	[INFO] 10.244.2.2:59321 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021403442s
	[INFO] 10.244.2.2:33112 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000361777s
	[INFO] 10.244.2.2:44063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012356307s
	[INFO] 10.244.2.2:52058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226126s
	[INFO] 10.244.2.2:45346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192986s
	[INFO] 10.244.0.4:42980 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001884522s
	[INFO] 10.244.0.4:33643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177169s
	[INFO] 10.244.0.4:55640 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105826s
	[INFO] 10.244.0.4:54019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112453s
	[INFO] 10.244.2.2:41133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144651s
	[INFO] 10.244.2.2:59362 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099749s
	[INFO] 10.244.2.2:32859 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102166s
	[INFO] 10.244.0.4:33356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105006s
	[INFO] 10.244.0.4:56803 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133644s
	[INFO] 10.244.1.2:34244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104241s
	[INFO] 10.244.1.2:43628 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148576s
	[INFO] 10.244.2.2:50718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000190649s
	[INFO] 10.244.0.4:44677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013354s
	[INFO] 10.244.0.4:45227 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159231s
	[INFO] 10.244.1.2:46121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135561s
	[INFO] 10.244.1.2:43459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116038s
	[INFO] 10.244.1.2:34953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088316s
	
	
	==> describe nodes <==
	Name:               ha-467706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:56:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    ha-467706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3208cc9eadd3453fab86398575c87f4f
	  System UUID:                3208cc9e-add3-453f-ab86-398575c87f4f
	  Boot ID:                    142d9103-8e77-48a0-a260-5d3c6e2e5842
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r65s7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 coredns-7db6d8ff4d-56dz8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m
	  kube-system                 coredns-7db6d8ff4d-kcdqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m
	  kube-system                 etcd-ha-467706                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-hspjv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-467706             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-467706    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-hd469                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-467706             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-467706                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m59s  kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-467706 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-467706 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-467706 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m1s   node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-467706 status is now: NodeReady
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal  RegisteredNode           3m34s  node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	
	
	Name:               ha-467706-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:51:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:53:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.236
	  Hostname:    ha-467706-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f49a89e0b0d7432fa507fd1ad108778d
	  System UUID:                f49a89e0-b0d7-432f-a507-fd1ad108778d
	  Boot ID:                    a312ddbf-6416-4cd3-b83f-4a865cbb9daf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xg855                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 etcd-ha-467706-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-k6b9s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-467706-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-467706-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-qxtf4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-467706-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-467706-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           3m34s                node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  NodeNotReady             98s                  node-controller  Node ha-467706-m02 status is now: NodeNotReady
	
	
	Name:               ha-467706-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_52_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:52:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:56:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-467706-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f9c357ef5b24ca6b2e9c8c989ff32f8
	  System UUID:                6f9c357e-f5b2-4ca6-b2e9-c8c989ff32f8
	  Boot ID:                    920b7140-01bb-49d7-ab98-44319db0cc1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzsn2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 etcd-ha-467706-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-5mvhn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-apiserver-ha-467706-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-ha-467706-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-jlcq7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-ha-467706-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-vip-ha-467706-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node ha-467706-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	
	
	Name:               ha-467706-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_53_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:53:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:56:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-467706-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd00fe12f3b54ed0af7c6ee4cc75cc20
	  System UUID:                dd00fe12-f3b5-4ed0-af7c-6ee4cc75cc20
	  Boot ID:                    af4f4186-bd72-4c86-9e7c-b804dc030414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-v8r5k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-c7znr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-467706-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr17 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053092] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.561690] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796176] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.989146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.404684] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.063801] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068254] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163928] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151596] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.299739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.466528] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062588] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.060993] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.990848] kauditd_printk_skb: 62 callbacks suppressed
	[Apr17 18:50] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.084981] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.627165] kauditd_printk_skb: 21 callbacks suppressed
	[Apr17 18:51] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04] <==
	{"level":"warn","ts":"2024-04-17T18:56:17.152416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.16902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.180267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.180681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.18571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.190394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.202172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.219472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.226704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.230359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.23359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.234409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.246851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.249002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.251621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.257322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.26452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.268942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.272368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.278862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.280563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.286274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.293376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.328161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:56:17.330184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:56:17 up 6 min,  0 users,  load average: 0.26, 0.14, 0.06
	Linux ha-467706 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d] <==
	I0417 18:55:39.560812       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:55:49.570846       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:55:49.570893       1 main.go:227] handling current node
	I0417 18:55:49.570903       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:55:49.570909       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:55:49.571009       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:55:49.571039       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:55:49.571144       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:55:49.571170       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:55:59.577627       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:55:59.577729       1 main.go:227] handling current node
	I0417 18:55:59.577753       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:55:59.577771       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:55:59.577892       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:55:59.577912       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:55:59.577977       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:55:59.577995       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:56:09.594417       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:56:09.594460       1 main.go:227] handling current node
	I0417 18:56:09.594471       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:56:09.594477       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:56:09.594574       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:56:09.594579       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:56:09.594623       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:56:09.594661       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f] <==
	I0417 18:50:04.318202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 18:50:04.355194       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0417 18:50:04.378610       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 18:50:16.846627       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0417 18:50:16.894909       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0417 18:52:26.096584       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0417 18:52:26.096677       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0417 18:52:26.096707       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.196µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0417 18:52:26.098071       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0417 18:52:26.098305       1 timeout.go:142] post-timeout activity - time-elapsed: 1.438942ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0417 18:52:53.633381       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32846: use of closed network connection
	E0417 18:52:53.851620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55360: use of closed network connection
	E0417 18:52:54.066049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55380: use of closed network connection
	E0417 18:52:54.312073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55394: use of closed network connection
	E0417 18:52:54.530310       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0417 18:52:54.749321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55418: use of closed network connection
	E0417 18:52:54.970072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55426: use of closed network connection
	E0417 18:52:55.170376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55456: use of closed network connection
	E0417 18:52:55.363232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55474: use of closed network connection
	E0417 18:52:55.704964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55496: use of closed network connection
	E0417 18:52:55.917674       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55522: use of closed network connection
	E0417 18:52:56.149324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55548: use of closed network connection
	E0417 18:52:56.358780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55554: use of closed network connection
	E0417 18:52:56.573732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55562: use of closed network connection
	W0417 18:54:12.608672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.250]
	
	
	==> kube-controller-manager [644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e] <==
	I0417 18:51:16.104067       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m02"
	I0417 18:52:25.246945       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-467706-m03\" does not exist"
	I0417 18:52:25.263499       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-467706-m03" podCIDRs=["10.244.2.0/24"]
	I0417 18:52:26.133648       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m03"
	I0417 18:52:49.983995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.444635ms"
	I0417 18:52:50.033279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.02233ms"
	I0417 18:52:50.161420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.074688ms"
	I0417 18:52:50.383994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.50029ms"
	I0417 18:52:50.433875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.821909ms"
	I0417 18:52:50.434037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.115µs"
	I0417 18:52:51.508597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.721µs"
	I0417 18:52:52.536956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.409602ms"
	I0417 18:52:52.537269       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.901µs"
	I0417 18:52:53.029529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.842838ms"
	I0417 18:52:53.029630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.918µs"
	I0417 18:52:53.104812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.029182ms"
	I0417 18:52:53.105468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.414µs"
	E0417 18:53:24.007584       1 certificate_controller.go:146] Sync csr-gt8sv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gt8sv": the object has been modified; please apply your changes to the latest version and try again
	I0417 18:53:24.279715       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-467706-m04\" does not exist"
	I0417 18:53:24.315391       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-467706-m04" podCIDRs=["10.244.3.0/24"]
	I0417 18:53:26.172529       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m04"
	I0417 18:53:34.662934       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-467706-m04"
	I0417 18:54:39.012020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-467706-m04"
	I0417 18:54:39.148264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.816388ms"
	I0417 18:54:39.148404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.341µs"
	
	
	==> kube-proxy [fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1] <==
	I0417 18:50:18.091985       1 server_linux.go:69] "Using iptables proxy"
	I0417 18:50:18.123753       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0417 18:50:18.174594       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 18:50:18.174693       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 18:50:18.174745       1 server_linux.go:165] "Using iptables Proxier"
	I0417 18:50:18.177785       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 18:50:18.178232       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 18:50:18.178431       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 18:50:18.179602       1 config.go:192] "Starting service config controller"
	I0417 18:50:18.179652       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 18:50:18.179690       1 config.go:101] "Starting endpoint slice config controller"
	I0417 18:50:18.179706       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 18:50:18.180692       1 config.go:319] "Starting node config controller"
	I0417 18:50:18.180731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 18:50:18.279797       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 18:50:18.279873       1 shared_informer.go:320] Caches are synced for service config
	I0417 18:50:18.281317       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c] <==
	W0417 18:50:01.055262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 18:50:01.058446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 18:50:01.902822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 18:50:01.902927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 18:50:01.966916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:50:01.967036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 18:50:01.967229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 18:50:01.967646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 18:50:02.044397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 18:50:02.045012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 18:50:02.094905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0417 18:50:02.096485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0417 18:50:02.376262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 18:50:02.376346       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0417 18:50:04.574514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0417 18:52:25.323167       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5mvhn\": pod kindnet-5mvhn is already assigned to node \"ha-467706-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-5mvhn" node="ha-467706-m03"
	E0417 18:52:25.323308       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1d1c6ddb-22cf-489e-8958-41434cbf8b0c(kube-system/kindnet-5mvhn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5mvhn"
	E0417 18:52:25.323334       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5mvhn\": pod kindnet-5mvhn is already assigned to node \"ha-467706-m03\"" pod="kube-system/kindnet-5mvhn"
	I0417 18:52:25.323383       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5mvhn" node="ha-467706-m03"
	E0417 18:52:25.324325       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jlcq7\": pod kube-proxy-jlcq7 is already assigned to node \"ha-467706-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jlcq7" node="ha-467706-m03"
	E0417 18:52:25.324409       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 05590f74-8ea6-42ef-9d72-33e15cfd3a32(kube-system/kube-proxy-jlcq7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jlcq7"
	E0417 18:52:25.324432       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jlcq7\": pod kube-proxy-jlcq7 is already assigned to node \"ha-467706-m03\"" pod="kube-system/kube-proxy-jlcq7"
	I0417 18:52:25.324451       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jlcq7" node="ha-467706-m03"
	E0417 18:53:24.402687       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wth9x\": pod kindnet-wth9x is already assigned to node \"ha-467706-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wth9x" node="ha-467706-m04"
	E0417 18:53:24.402914       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wth9x\": pod kindnet-wth9x is already assigned to node \"ha-467706-m04\"" pod="kube-system/kindnet-wth9x"
	
	
	==> kubelet <==
	Apr 17 18:52:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:52:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:52:49 ha-467706 kubelet[1377]: I0417 18:52:49.986563    1377 topology_manager.go:215] "Topology Admit Handler" podUID="a14d0b32-aa41-4396-a2f7-643e8e32d96d" podNamespace="default" podName="busybox-fc5497c4f-r65s7"
	Apr 17 18:52:50 ha-467706 kubelet[1377]: I0417 18:52:50.107964    1377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92zvl\" (UniqueName: \"kubernetes.io/projected/a14d0b32-aa41-4396-a2f7-643e8e32d96d-kube-api-access-92zvl\") pod \"busybox-fc5497c4f-r65s7\" (UID: \"a14d0b32-aa41-4396-a2f7-643e8e32d96d\") " pod="default/busybox-fc5497c4f-r65s7"
	Apr 17 18:52:56 ha-467706 kubelet[1377]: E0417 18:52:56.359400    1377 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47598->127.0.0.1:33177: write tcp 127.0.0.1:47598->127.0.0.1:33177: write: broken pipe
	Apr 17 18:53:04 ha-467706 kubelet[1377]: E0417 18:53:04.307805    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:53:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:54:04 ha-467706 kubelet[1377]: E0417 18:54:04.306427    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:54:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:55:04 ha-467706 kubelet[1377]: E0417 18:55:04.305151    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:55:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:56:04 ha-467706 kubelet[1377]: E0417 18:56:04.307844    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:56:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
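For reference (not part of the captured run): the sections above are minikube's standard post-mortem dump for the ha-467706 profile. Against a live profile, a comparable dump can usually be regenerated with commands along these lines, where the profile and node names are taken from this run and the output file name is arbitrary:

	out/minikube-linux-amd64 -p ha-467706 logs --file=ha-467706-postmortem.txt
	out/minikube-linux-amd64 -p ha-467706 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"
	kubectl --context ha-467706 describe node ha-467706-m04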
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-467706 -n ha-467706
helpers_test.go:261: (dbg) Run:  kubectl --context ha-467706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.21s)
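For reference (not part of the captured run): after a secondary control-plane node is stopped, a quick view of what the surviving members still report can be obtained with commands along these lines; the tier=control-plane selector assumes the standard kubeadm static-pod labels:

	kubectl --context ha-467706 get nodes -o wide
	kubectl --context ha-467706 -n kube-system get pods -l tier=control-plane -o wide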

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (51.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (3.188529512s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:22.049338  100401 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:22.049570  100401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:22.049578  100401 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:22.049582  100401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:22.049770  100401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:22.049952  100401 out.go:298] Setting JSON to false
	I0417 18:56:22.049984  100401 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:22.050090  100401 notify.go:220] Checking for updates...
	I0417 18:56:22.050333  100401 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:22.050348  100401 status.go:255] checking status of ha-467706 ...
	I0417 18:56:22.050747  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.050801  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.066048  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I0417 18:56:22.066601  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.067286  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.067332  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.067757  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.067995  100401 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:22.069712  100401 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:22.069739  100401 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:22.070078  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.070133  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.085702  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0417 18:56:22.086255  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.086842  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.086872  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.087194  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.087364  100401 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:22.090274  100401 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:22.090725  100401 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:22.090755  100401 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:22.090840  100401 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:22.091246  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.091307  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.106195  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0417 18:56:22.106649  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.107257  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.107285  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.107632  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.107874  100401 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:22.108089  100401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:22.108115  100401 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:22.111120  100401 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:22.111585  100401 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:22.111610  100401 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:22.111770  100401 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:22.111964  100401 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:22.112126  100401 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:22.112258  100401 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:22.189864  100401 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:22.196713  100401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:22.216230  100401 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:22.216270  100401 api_server.go:166] Checking apiserver status ...
	I0417 18:56:22.216305  100401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:22.234490  100401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:22.247045  100401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:22.247123  100401 ssh_runner.go:195] Run: ls
	I0417 18:56:22.252873  100401 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:22.259517  100401 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:22.259544  100401 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:22.259554  100401 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:22.259598  100401 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:22.259928  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.259977  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.274909  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0417 18:56:22.275332  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.275876  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.275905  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.276245  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.276437  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:22.278037  100401 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:22.278057  100401 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:22.278345  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.278380  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.292881  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0417 18:56:22.293810  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.295170  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.295195  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.295544  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.295768  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:22.298721  100401 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:22.299121  100401 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:22.299147  100401 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:22.299343  100401 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:22.299654  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:22.299697  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:22.316382  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I0417 18:56:22.316891  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:22.317384  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:22.317406  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:22.317720  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:22.317920  100401 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:22.318128  100401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:22.318163  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:22.320729  100401 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:22.321149  100401 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:22.321175  100401 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:22.321364  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:22.321545  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:22.321700  100401 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:22.321856  100401 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:24.817079  100401 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:24.817213  100401 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:24.817234  100401 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:24.817247  100401 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:24.817305  100401 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:24.817318  100401 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:24.817658  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:24.817717  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:24.833447  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
	I0417 18:56:24.833910  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:24.834409  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:24.834434  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:24.834820  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:24.835036  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:24.836735  100401 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:24.836752  100401 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:24.837145  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:24.837197  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:24.852433  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0417 18:56:24.852898  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:24.853384  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:24.853410  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:24.853771  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:24.853962  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:24.857031  100401 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:24.857534  100401 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:24.857564  100401 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:24.857735  100401 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:24.858076  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:24.858121  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:24.873567  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0417 18:56:24.874064  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:24.874659  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:24.874678  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:24.875014  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:24.875215  100401 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:24.875435  100401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:24.875454  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:24.878616  100401 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:24.879117  100401 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:24.879152  100401 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:24.879361  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:24.879553  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:24.879702  100401 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:24.879844  100401 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:24.965570  100401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:24.982983  100401 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:24.983012  100401 api_server.go:166] Checking apiserver status ...
	I0417 18:56:24.983051  100401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:24.998528  100401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:25.011664  100401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:25.011725  100401 ssh_runner.go:195] Run: ls
	I0417 18:56:25.017166  100401 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:25.023385  100401 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:25.023416  100401 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:25.023424  100401 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:25.023440  100401 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:25.023722  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:25.023770  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:25.038802  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0417 18:56:25.039279  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:25.039790  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:25.039815  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:25.040188  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:25.040397  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:25.041953  100401 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:25.041975  100401 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:25.042258  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:25.042292  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:25.057361  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0417 18:56:25.057845  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:25.058409  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:25.058460  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:25.058794  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:25.059032  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:25.062008  100401 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:25.062549  100401 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:25.062592  100401 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:25.062686  100401 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:25.063020  100401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:25.063068  100401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:25.078482  100401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0417 18:56:25.078979  100401 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:25.079531  100401 main.go:141] libmachine: Using API Version  1
	I0417 18:56:25.079558  100401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:25.079895  100401 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:25.080095  100401 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:25.080322  100401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:25.080350  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:25.083504  100401 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:25.084008  100401 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:25.084043  100401 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:25.084202  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:25.084390  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:25.084554  100401 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:25.084703  100401 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:25.164927  100401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:25.180685  100401 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
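The stderr trace above shows the per-node routine that `minikube status` walks for each machine in the ha-467706 cluster: launch the kvm2 driver plugin, resolve the node's IP from the libvirt DHCP lease, open an SSH session on port 22, check /var usage with df, probe the kubelet unit with systemctl, and, for control-plane nodes, verify the apiserver through the HA virtual IP at https://192.168.39.254:8443/healthz. The Go sketch below only condenses that flow for a single node so the repeated log blocks are easier to follow; checkNode, runSSH and NodeStatus are made-up names rather than minikube's actual status.go API, and the health probe skips TLS verification where the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// NodeStatus mirrors the fields printed in the status blocks above.
type NodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// runSSH stands in for minikube's ssh_runner: it shells out to ssh with the
// per-machine key, much as the sshutil.go lines above do over port 22.
func runSSH(ip, key, cmd string) error {
	return exec.Command("ssh", "-i", key, "-o", "ConnectTimeout=5",
		"-o", "StrictHostKeyChecking=no", "docker@"+ip, cmd).Run()
}

func checkNode(name, ip, key string, controlPlane bool) NodeStatus {
	st := NodeStatus{Name: name, Host: "Running", Kubelet: "Nonexistent", APIServer: "Irrelevant"}

	// Storage check, mirroring `df -h /var | awk 'NR==2{print $5}'` above.
	if err := runSSH(ip, key, `df -h /var | awk 'NR==2{print $5}'`); err != nil {
		st.Host = "Error" // e.g. "no route to host", as seen for ha-467706-m02
		st.APIServer = "Nonexistent"
		return st
	}

	// Kubelet check, mirroring the `systemctl is-active` call in the trace.
	if runSSH(ip, key, "sudo systemctl is-active --quiet kubelet") == nil {
		st.Kubelet = "Running"
	}

	if controlPlane {
		// Apiserver health through the HA virtual IP, as in api_server.go above.
		// Certificate verification is skipped here only for the sketch.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		st.APIServer = "Stopped"
		if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				st.APIServer = "Running"
			}
		}
	}
	return st
}

func main() {
	// Hypothetical key path; the report uses per-machine id_rsa keys under .minikube/machines/.
	fmt.Printf("%+v\n", checkNode("ha-467706-m02", "192.168.39.236", "/path/to/id_rsa", true))
}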
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (5.395670672s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:25.983846  100496 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:25.983970  100496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:25.983980  100496 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:25.983984  100496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:25.984179  100496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:25.984365  100496 out.go:298] Setting JSON to false
	I0417 18:56:25.984393  100496 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:25.984543  100496 notify.go:220] Checking for updates...
	I0417 18:56:25.984816  100496 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:25.984836  100496 status.go:255] checking status of ha-467706 ...
	I0417 18:56:25.985999  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:25.986078  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.001867  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0417 18:56:26.002318  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.002933  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.002957  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.003387  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.003681  100496 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:26.005564  100496 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:26.005586  100496 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:26.005881  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:26.005930  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.021918  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0417 18:56:26.022336  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.022862  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.022889  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.023202  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.023445  100496 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:26.026366  100496 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:26.026859  100496 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:26.026902  100496 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:26.027040  100496 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:26.027366  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:26.027396  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.042908  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0417 18:56:26.043385  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.043923  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.043945  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.044306  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.044488  100496 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:26.044731  100496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:26.044759  100496 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:26.047799  100496 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:26.048232  100496 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:26.048256  100496 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:26.048501  100496 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:26.048690  100496 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:26.048875  100496 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:26.049010  100496 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:26.129225  100496 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:26.136441  100496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:26.153099  100496 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:26.153142  100496 api_server.go:166] Checking apiserver status ...
	I0417 18:56:26.153189  100496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:26.168449  100496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:26.180426  100496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:26.180496  100496 ssh_runner.go:195] Run: ls
	I0417 18:56:26.185401  100496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:26.191668  100496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:26.191699  100496 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:26.191720  100496 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:26.191743  100496 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:26.192064  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:26.192122  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.208040  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0417 18:56:26.208492  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.209073  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.209103  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.209461  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.209708  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:26.211498  100496 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:26.211518  100496 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:26.211841  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:26.211889  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.228910  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0417 18:56:26.229337  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.229848  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.229892  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.230232  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.230437  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:26.233934  100496 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:26.234378  100496 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:26.234411  100496 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:26.234526  100496 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:26.234849  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:26.234918  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:26.250154  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I0417 18:56:26.250599  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:26.251098  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:26.251122  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:26.251442  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:26.251676  100496 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:26.251900  100496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:26.251923  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:26.254493  100496 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:26.254865  100496 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:26.254897  100496 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:26.255006  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:26.255190  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:26.255343  100496 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:26.255455  100496 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:27.885078  100496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:27.885136  100496 retry.go:31] will retry after 208.959764ms: dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:30.957149  100496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:30.957307  100496 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:30.957335  100496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:30.957342  100496 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:30.957368  100496 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:30.957377  100496 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:30.957683  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:30.957721  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:30.972860  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0417 18:56:30.973343  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:30.973816  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:30.973836  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:30.974234  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:30.974445  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:30.976048  100496 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:30.976065  100496 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:30.976469  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:30.976517  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:30.992861  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0417 18:56:30.993342  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:30.993833  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:30.993855  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:30.994137  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:30.994342  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:30.996748  100496 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:30.997321  100496 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:30.997366  100496 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:30.997434  100496 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:30.997738  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:30.997784  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:31.013304  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37913
	I0417 18:56:31.013800  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:31.014311  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:31.014331  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:31.014641  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:31.014883  100496 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:31.015099  100496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:31.015137  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:31.017828  100496 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:31.018246  100496 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:31.018273  100496 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:31.018442  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:31.018603  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:31.018754  100496 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:31.018892  100496 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:31.100587  100496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:31.118619  100496 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:31.118654  100496 api_server.go:166] Checking apiserver status ...
	I0417 18:56:31.118696  100496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:31.135872  100496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:31.147061  100496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:31.147134  100496 ssh_runner.go:195] Run: ls
	I0417 18:56:31.152347  100496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:31.159055  100496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:31.159084  100496 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:31.159093  100496 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:31.159122  100496 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:31.159490  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:31.159531  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:31.175242  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0417 18:56:31.175833  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:31.176365  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:31.176380  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:31.176658  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:31.176932  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:31.178513  100496 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:31.178530  100496 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:31.178821  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:31.178847  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:31.194943  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0417 18:56:31.195438  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:31.196004  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:31.196026  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:31.196356  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:31.196564  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:31.199502  100496 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:31.199905  100496 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:31.199934  100496 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:31.200114  100496 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:31.200452  100496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:31.200504  100496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:31.216882  100496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I0417 18:56:31.217291  100496 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:31.217864  100496 main.go:141] libmachine: Using API Version  1
	I0417 18:56:31.217889  100496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:31.218200  100496 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:31.218408  100496 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:31.218588  100496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:31.218611  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:31.221062  100496 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:31.221479  100496 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:31.221508  100496 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:31.221699  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:31.221917  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:31.222076  100496 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:31.222297  100496 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:31.304578  100496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:31.320323  100496 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
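Only ha-467706-m02 fails in these runs: each attempt stalls on "dial tcp 192.168.39.236:22: connect: no route to host", retries briefly (the sshutil.go / retry.go lines above), and then reports Host:Error with kubelet and apiserver Nonexistent, which is consistent with the exit status 3 and the roughly 4-5 second runtime of each status invocation. A minimal sketch of that dial-and-retry pattern follows; the attempt count and backoff values are illustrative, not minikube's actual retry.go parameters.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries to reach addr (e.g. "192.168.39.236:22") a few times,
// backing off between attempts, and gives up with the last error seen.
func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err // e.g. "connect: no route to host" while m02 is unreachable
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("giving up on %s: %w", addr, lastErr)
}

func main() {
	if err := dialWithRetry("192.168.39.236:22", 3, 200*time.Millisecond); err != nil {
		// The node is then reported as Host:Error / Kubelet:Nonexistent, as above.
		fmt.Println(err)
	}
}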
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (4.166409031s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:33.495095  100602 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:33.495263  100602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:33.495279  100602 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:33.495298  100602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:33.495513  100602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:33.495721  100602 out.go:298] Setting JSON to false
	I0417 18:56:33.495756  100602 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:33.495883  100602 notify.go:220] Checking for updates...
	I0417 18:56:33.496217  100602 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:33.496235  100602 status.go:255] checking status of ha-467706 ...
	I0417 18:56:33.496709  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.496812  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.517508  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0417 18:56:33.517959  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.518616  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.518652  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.519018  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.519238  100602 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:33.520739  100602 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:33.520760  100602 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:33.521208  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.521287  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.537704  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0417 18:56:33.538282  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.538825  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.538846  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.539176  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.539405  100602 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:33.542392  100602 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:33.542881  100602 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:33.542910  100602 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:33.543044  100602 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:33.543342  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.543385  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.559907  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
	I0417 18:56:33.560509  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.561076  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.561102  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.561446  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.561657  100602 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:33.561875  100602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:33.561915  100602 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:33.564712  100602 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:33.565177  100602 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:33.565214  100602 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:33.565390  100602 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:33.565557  100602 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:33.565750  100602 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:33.565919  100602 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:33.645358  100602 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:33.651692  100602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:33.667754  100602 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:33.667798  100602 api_server.go:166] Checking apiserver status ...
	I0417 18:56:33.667845  100602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:33.682922  100602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:33.693781  100602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:33.693842  100602 ssh_runner.go:195] Run: ls
	I0417 18:56:33.698535  100602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:33.703358  100602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:33.703388  100602 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:33.703401  100602 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:33.703423  100602 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:33.703860  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.703913  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.719399  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0417 18:56:33.719886  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.720588  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.720614  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.721008  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.721226  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:33.722820  100602 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:33.722839  100602 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:33.723182  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.723221  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.739106  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0417 18:56:33.739545  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.739998  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.740029  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.740317  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.740534  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:33.743442  100602 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:33.743861  100602 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:33.743897  100602 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:33.744069  100602 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:33.744382  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:33.744421  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:33.759611  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44907
	I0417 18:56:33.760074  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:33.760634  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:33.760853  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:33.762471  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:33.762696  100602 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:33.762951  100602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:33.762985  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:33.765850  100602 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:33.766270  100602 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:33.766315  100602 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:33.766350  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:33.766543  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:33.766705  100602 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:33.766847  100602 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:34.029000  100602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:34.029066  100602 retry.go:31] will retry after 143.479411ms: dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:37.233171  100602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:37.233306  100602 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:37.233330  100602 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:37.233352  100602 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:37.233388  100602 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:37.233401  100602 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:37.233765  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.233839  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.249402  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0417 18:56:37.249877  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.250381  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.250406  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.250767  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.250977  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:37.252717  100602 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:37.252735  100602 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:37.253052  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.253103  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.268238  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0417 18:56:37.268727  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.269339  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.269370  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.269701  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.269929  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:37.272865  100602 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:37.273340  100602 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:37.273360  100602 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:37.273511  100602 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:37.273794  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.273830  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.290852  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34431
	I0417 18:56:37.291245  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.291717  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.291746  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.292085  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.292271  100602 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:37.292471  100602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:37.292493  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:37.295158  100602 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:37.295567  100602 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:37.295609  100602 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:37.295763  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:37.295950  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:37.296096  100602 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:37.296259  100602 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:37.377143  100602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:37.395599  100602 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:37.395632  100602 api_server.go:166] Checking apiserver status ...
	I0417 18:56:37.395675  100602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:37.414041  100602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:37.427682  100602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:37.427738  100602 ssh_runner.go:195] Run: ls
	I0417 18:56:37.433066  100602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:37.439961  100602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:37.439999  100602 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:37.440012  100602 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:37.440037  100602 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:37.440494  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.440552  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.456148  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0417 18:56:37.456592  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.457130  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.457156  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.457477  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.457662  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:37.459271  100602 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:37.459284  100602 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:37.459559  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.459598  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.475140  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0417 18:56:37.475588  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.476039  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.476062  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.476409  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.476610  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:37.479500  100602 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:37.479941  100602 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:37.479974  100602 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:37.480085  100602 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:37.480397  100602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:37.480441  100602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:37.496368  100602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0417 18:56:37.496925  100602 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:37.497415  100602 main.go:141] libmachine: Using API Version  1
	I0417 18:56:37.497438  100602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:37.497812  100602 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:37.498016  100602 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:37.498199  100602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:37.498225  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:37.501180  100602 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:37.501664  100602 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:37.501689  100602 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:37.501909  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:37.502097  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:37.502230  100602 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:37.502344  100602 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:37.584144  100602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:37.599085  100602 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
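The same stdout block repeats on every retry: m02 stays at host: Error while the other three nodes report Running, and the test at ha_test.go:428 keeps re-running the status command. For anyone consuming that plain-text output programmatically rather than eyeballing it, a small parser along the lines of the hypothetical sketch below (parseStatus is not part of the test suite) turns each node block into a field map; run against the block above it yields host=Error for ha-467706-m02 and host=Running for the remaining nodes.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseStatus converts the plain-text `minikube status` output shown above
// into a map of node name -> field name -> value.
func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	var current string
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			current = "" // blank line ends the current node block
			continue
		}
		if k, v, ok := strings.Cut(line, ":"); ok {
			if current != "" {
				nodes[current][strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
			continue
		}
		current = line // a bare line such as "ha-467706-m02" starts a new node
		nodes[current] = map[string]string{}
	}
	return nodes
}

func main() {
	out := "ha-467706-m02\ntype: Control Plane\nhost: Error\nkubelet: Nonexistent\n"
	fmt.Println(parseStatus(out)["ha-467706-m02"]["host"]) // prints: Error
}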
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (4.119121066s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:40.029889  100697 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:40.030042  100697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:40.030054  100697 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:40.030060  100697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:40.030308  100697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:40.030531  100697 out.go:298] Setting JSON to false
	I0417 18:56:40.030562  100697 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:40.030735  100697 notify.go:220] Checking for updates...
	I0417 18:56:40.030938  100697 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:40.030953  100697 status.go:255] checking status of ha-467706 ...
	I0417 18:56:40.031344  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.031407  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.053211  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0417 18:56:40.053721  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.054377  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.054436  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.054790  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.054997  100697 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:40.056690  100697 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:40.056710  100697 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:40.057058  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.057096  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.074520  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0417 18:56:40.075017  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.075512  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.075535  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.075964  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.076251  100697 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:40.079250  100697 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:40.079771  100697 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:40.079797  100697 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:40.079991  100697 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:40.080306  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.080342  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.096344  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0417 18:56:40.096847  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.097391  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.097414  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.097798  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.098050  100697 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:40.098250  100697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:40.098277  100697 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:40.101544  100697 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:40.101963  100697 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:40.101991  100697 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:40.102170  100697 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:40.102337  100697 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:40.102510  100697 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:40.102678  100697 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:40.180696  100697 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:40.187430  100697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:40.202838  100697 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:40.202881  100697 api_server.go:166] Checking apiserver status ...
	I0417 18:56:40.202915  100697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:40.218496  100697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:40.228824  100697 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:40.228904  100697 ssh_runner.go:195] Run: ls
	I0417 18:56:40.234363  100697 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:40.239014  100697 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:40.239046  100697 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:40.239061  100697 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:40.239082  100697 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:40.239499  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.239551  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.255774  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36463
	I0417 18:56:40.256208  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.256701  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.256732  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.257072  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.257264  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:40.258785  100697 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:40.258803  100697 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:40.259624  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.259675  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.276817  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0417 18:56:40.277340  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.277868  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.277899  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.278221  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.278400  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:40.281460  100697 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:40.281842  100697 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:40.281870  100697 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:40.281993  100697 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:40.282310  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:40.282366  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:40.297877  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0417 18:56:40.298317  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:40.298943  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:40.298975  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:40.299348  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:40.299559  100697 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:40.299799  100697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:40.299823  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:40.303067  100697 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:40.303487  100697 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:40.303523  100697 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:40.303738  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:40.303923  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:40.304129  100697 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:40.304353  100697 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:40.304960  100697 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:40.305019  100697 retry.go:31] will retry after 362.070071ms: dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:43.725031  100697 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:43.725129  100697 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:43.725158  100697 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:43.725165  100697 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:43.725190  100697 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:43.725197  100697 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:43.725636  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.725687  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.741261  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I0417 18:56:43.741684  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.742149  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.742175  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.742531  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.742764  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:43.744446  100697 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:43.744471  100697 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:43.744797  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.744847  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.759788  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0417 18:56:43.760315  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.760836  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.760861  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.761223  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.761472  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:43.764317  100697 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:43.764808  100697 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:43.764840  100697 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:43.764963  100697 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:43.765355  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.765394  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.782027  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I0417 18:56:43.782445  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.782920  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.782942  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.783284  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.783469  100697 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:43.783678  100697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:43.783699  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:43.786381  100697 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:43.786849  100697 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:43.786888  100697 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:43.787085  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:43.787275  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:43.787439  100697 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:43.787582  100697 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:43.868821  100697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:43.884490  100697 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:43.884523  100697 api_server.go:166] Checking apiserver status ...
	I0417 18:56:43.884566  100697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:43.899688  100697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:43.917745  100697 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:43.917813  100697 ssh_runner.go:195] Run: ls
	I0417 18:56:43.923048  100697 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:43.927498  100697 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:43.927525  100697 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:43.927534  100697 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:43.927552  100697 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:43.927836  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.927873  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.943347  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I0417 18:56:43.943819  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.944294  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.944318  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.944630  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.944847  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:43.946710  100697 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:43.946739  100697 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:43.947018  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.947054  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.962537  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0417 18:56:43.963051  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.963657  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.963680  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.964030  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.964246  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:43.967669  100697 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:43.968075  100697 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:43.968118  100697 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:43.968298  100697 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:43.968616  100697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:43.968667  100697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:43.985402  100697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0417 18:56:43.985825  100697 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:43.986320  100697 main.go:141] libmachine: Using API Version  1
	I0417 18:56:43.986343  100697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:43.986691  100697 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:43.986883  100697 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:43.987053  100697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:43.987070  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:43.989784  100697 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:43.990186  100697 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:43.990207  100697 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:43.990404  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:43.990589  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:43.990779  100697 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:43.990976  100697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:44.073168  100697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:44.089314  100697 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
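
The healthy-node path in the log above is the same for ha-467706, ha-467706-m03 and ha-467706-m04: launch the kvm2 driver plugin, open an SSH session to the node, check /var usage with sh -c "df -h /var | awk 'NR==2{print $5}'", probe the kubelet unit with systemctl is-active, and hit the load-balanced apiserver at https://192.168.39.254:8443/healthz (a 200 maps to "apiserver status = Running"). The sketch below is only an illustration of those three probes in plain Go so the log lines are easier to follow; the helper names are hypothetical, it is not minikube's implementation, and the SSH hop is replaced by local exec.

// node_probe_sketch.go — a minimal sketch of the per-node checks visible in the
// status log above; NOT minikube's implementation. The SSH-based ssh_runner
// calls are approximated with local exec.Command, and the helper names
// (checkDiskUsage, checkKubelet, checkAPIServer) are made up for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// checkDiskUsage mirrors: sh -c "df -h /var | awk 'NR==2{print $5}'"
// and returns the usage percentage of /var as a string (e.g. "17%").
func checkDiskUsage() (string, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	return strings.TrimSpace(string(out)), err
}

// checkKubelet mirrors: systemctl is-active --quiet kubelet
// (the log runs it with sudo over SSH; a zero exit status means "active").
func checkKubelet() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

// checkAPIServer mirrors the GET https://<vip>:8443/healthz probe; a 200
// response is treated as a healthy apiserver.
func checkAPIServer(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real check trusts the cluster CA; verification is skipped here
		// only so the sketch runs against a self-signed test endpoint.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	usage, err := checkDiskUsage()
	fmt.Printf("/var usage: %q (err=%v)\n", usage, err)
	fmt.Printf("kubelet active: %v\n", checkKubelet())
	fmt.Printf("apiserver healthy: %v\n", checkAPIServer("https://192.168.39.254:8443/healthz"))
}

When all three probes succeed the node is reported as Host:Running / Kubelet:Running / APIServer:Running, which matches the ha-467706, -m03 and -m04 entries in the stdout block above.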
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (4.514418429s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:45.935988  100793 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:45.936265  100793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:45.936275  100793 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:45.936279  100793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:45.936522  100793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:45.936752  100793 out.go:298] Setting JSON to false
	I0417 18:56:45.936818  100793 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:45.936933  100793 notify.go:220] Checking for updates...
	I0417 18:56:45.937297  100793 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:45.937314  100793 status.go:255] checking status of ha-467706 ...
	I0417 18:56:45.937735  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:45.937804  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:45.954203  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0417 18:56:45.954658  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:45.955353  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:45.955382  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:45.955748  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:45.955959  100793 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:45.957837  100793 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:45.957857  100793 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:45.958270  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:45.958318  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:45.973672  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0417 18:56:45.974072  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:45.974563  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:45.974580  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:45.974960  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:45.975208  100793 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:45.978101  100793 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:45.978441  100793 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:45.978467  100793 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:45.978630  100793 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:45.978943  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:45.978999  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:45.994716  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0417 18:56:45.995136  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:45.995589  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:45.995609  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:45.995917  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:45.996128  100793 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:45.996359  100793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:45.996401  100793 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:45.999429  100793 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:45.999890  100793 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:45.999922  100793 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:46.000059  100793 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:46.000261  100793 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:46.000496  100793 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:46.000745  100793 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:46.077093  100793 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:46.083751  100793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:46.103301  100793 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:46.103338  100793 api_server.go:166] Checking apiserver status ...
	I0417 18:56:46.103376  100793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:46.118794  100793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:46.129426  100793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:46.129532  100793 ssh_runner.go:195] Run: ls
	I0417 18:56:46.134874  100793 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:46.139102  100793 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:46.139127  100793 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:46.139136  100793 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:46.139151  100793 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:46.139456  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:46.139507  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:46.155317  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0417 18:56:46.155816  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:46.156398  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:46.156425  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:46.156783  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:46.156970  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:46.158666  100793 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:46.158685  100793 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:46.158994  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:46.159044  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:46.173961  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I0417 18:56:46.174412  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:46.174888  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:46.174914  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:46.175276  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:46.175477  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:46.178406  100793 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:46.178859  100793 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:46.178900  100793 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:46.179186  100793 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:46.179573  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:46.179630  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:46.194962  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0417 18:56:46.195372  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:46.195864  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:46.195891  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:46.196206  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:46.196394  100793 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:46.196618  100793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:46.196640  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:46.199318  100793 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:46.199771  100793 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:46.199802  100793 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:46.199983  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:46.200159  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:46.200340  100793 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:46.200472  100793 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:46.797000  100793 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:46.797062  100793 retry.go:31] will retry after 160.233772ms: dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:50.029037  100793 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:50.029135  100793 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:50.029155  100793 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:50.029189  100793 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:50.029226  100793 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:50.029234  100793 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:50.029682  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.029745  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.044840  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0417 18:56:50.045322  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.045802  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.045832  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.046182  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.046395  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:50.048049  100793 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:50.048065  100793 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:50.048365  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.048418  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.063861  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0417 18:56:50.064348  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.064927  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.064956  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.065346  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.065565  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:50.068592  100793 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:50.069080  100793 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:50.069100  100793 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:50.069257  100793 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:50.069574  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.069637  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.084358  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0417 18:56:50.084870  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.085379  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.085404  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.085756  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.085975  100793 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:50.086173  100793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:50.086198  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:50.089160  100793 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:50.089600  100793 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:50.089625  100793 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:50.089822  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:50.089998  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:50.090170  100793 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:50.090290  100793 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:50.176677  100793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:50.194936  100793 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:50.194966  100793 api_server.go:166] Checking apiserver status ...
	I0417 18:56:50.194995  100793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:50.212070  100793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:50.224648  100793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:50.224706  100793 ssh_runner.go:195] Run: ls
	I0417 18:56:50.229759  100793 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:50.234982  100793 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:50.235013  100793 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:50.235026  100793 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:50.235047  100793 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:50.235387  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.235424  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.250652  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0417 18:56:50.251241  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.251762  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.251797  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.252141  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.252341  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:50.254023  100793 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:50.254045  100793 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:50.254430  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.254471  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.269302  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0417 18:56:50.269713  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.270268  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.270298  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.270694  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.270926  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:50.274161  100793 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:50.274623  100793 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:50.274667  100793 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:50.274779  100793 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:50.275133  100793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:50.275197  100793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:50.290601  100793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I0417 18:56:50.291186  100793 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:50.291690  100793 main.go:141] libmachine: Using API Version  1
	I0417 18:56:50.291720  100793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:50.292064  100793 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:50.292287  100793 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:50.292527  100793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:50.292558  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:50.295551  100793 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:50.295997  100793 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:50.296022  100793 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:50.296174  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:50.296377  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:50.296520  100793 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:50.296681  100793 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:50.376276  100793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:50.391896  100793 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
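
For ha-467706-m02 the same probe never gets past the SSH dial: 192.168.39.236:22 answers "no route to host", sshutil retries once after a short backoff, and when the retry also fails the node is reported as Host:Error with Kubelet and APIServer marked Nonexistent, which is why each status invocation exits with status 3. The snippet below is a hedged sketch of that bounded-retry pattern; dialWithRetry is a hypothetical helper, not minikube's sshutil or retry.go, and only shows how repeated dials against a dead address end in the error that the status code reports as Host:Error.

// dial_retry_sketch.go — illustrative only, assuming a simple fixed backoff.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection up to attempts times, sleeping
// backoff between tries, and returns the last error if every attempt fails.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return nil, lastErr
}

func main() {
	// 192.168.39.236:22 is the unreachable ha-467706-m02 address from the log.
	if _, err := dialWithRetry("192.168.39.236:22", 3, 400*time.Millisecond); err != nil {
		// In the status path this surfaces as
		// Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
		fmt.Println("node unreachable:", err)
	}
}

The repeated runs of "status" below hit the same dead address each time, so every invocation shows the identical m02 error block while the other three nodes keep reporting Running.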
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (3.769389394s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:56:55.313284  100900 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:55.313427  100900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:55.313441  100900 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:55.313447  100900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:55.313706  100900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:56:55.313886  100900 out.go:298] Setting JSON to false
	I0417 18:56:55.313914  100900 mustload.go:65] Loading cluster: ha-467706
	I0417 18:56:55.313954  100900 notify.go:220] Checking for updates...
	I0417 18:56:55.314294  100900 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:55.314311  100900 status.go:255] checking status of ha-467706 ...
	I0417 18:56:55.314798  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.314879  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.335372  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0417 18:56:55.335882  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.336442  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.336477  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.337001  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.337182  100900 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:56:55.338934  100900 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:56:55.338949  100900 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:55.339226  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.339262  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.354965  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0417 18:56:55.355444  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.355995  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.356034  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.356474  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.356730  100900 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:56:55.359792  100900 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:55.360198  100900 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:55.360224  100900 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:55.360376  100900 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:56:55.360664  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.360700  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.375632  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
	I0417 18:56:55.376073  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.376563  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.376587  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.376951  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.377165  100900 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:56:55.377380  100900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:55.377406  100900 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:56:55.380159  100900 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:55.380753  100900 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:56:55.380794  100900 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:56:55.380975  100900 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:56:55.381153  100900 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:56:55.381331  100900 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:56:55.381511  100900 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:56:55.460806  100900 ssh_runner.go:195] Run: systemctl --version
	I0417 18:56:55.467430  100900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:55.483673  100900 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:55.483719  100900 api_server.go:166] Checking apiserver status ...
	I0417 18:56:55.483753  100900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:55.500195  100900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:56:55.513786  100900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:55.513872  100900 ssh_runner.go:195] Run: ls
	I0417 18:56:55.519464  100900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:55.525768  100900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:55.525808  100900 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:56:55.525823  100900 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:55.525850  100900 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:56:55.526179  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.526216  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.541925  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0417 18:56:55.542544  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.543194  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.543233  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.543653  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.543924  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:56:55.545596  100900 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 18:56:55.545615  100900 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:55.545933  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.545980  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.561298  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0417 18:56:55.561796  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.562287  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.562317  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.562635  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.562912  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:56:55.566028  100900 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:55.566465  100900 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:55.566490  100900 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:55.566643  100900 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 18:56:55.566942  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:55.566983  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:55.583580  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I0417 18:56:55.584067  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:55.584515  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:55.584536  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:55.584910  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:55.585098  100900 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:56:55.585271  100900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:55.585288  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:56:55.588000  100900 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:55.588424  100900 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:56:55.588454  100900 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:56:55.588717  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:56:55.588946  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:56:55.589121  100900 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:56:55.589284  100900 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	W0417 18:56:58.669036  100900 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.236:22: connect: no route to host
	W0417 18:56:58.669143  100900 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	E0417 18:56:58.669164  100900 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:58.669173  100900 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0417 18:56:58.669193  100900 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.236:22: connect: no route to host
	I0417 18:56:58.669205  100900 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:56:58.669531  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.669595  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.685005  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0417 18:56:58.685524  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.686079  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.686106  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.686525  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.686742  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:56:58.688858  100900 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:56:58.688877  100900 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:58.689172  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.689208  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.704174  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0417 18:56:58.704669  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.705319  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.705356  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.705774  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.706019  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:56:58.708834  100900 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:58.709326  100900 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:58.709354  100900 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:58.709500  100900 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:56:58.709789  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.709825  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.725546  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0417 18:56:58.725985  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.726455  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.726477  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.726801  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.726987  100900 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:56:58.727211  100900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:58.727232  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:56:58.729878  100900 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:58.730309  100900 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:56:58.730337  100900 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:56:58.730468  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:56:58.730642  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:56:58.730807  100900 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:56:58.730954  100900 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:56:58.812605  100900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:58.827691  100900 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:56:58.827725  100900 api_server.go:166] Checking apiserver status ...
	I0417 18:56:58.827761  100900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:56:58.843407  100900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:56:58.854035  100900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:56:58.854101  100900 ssh_runner.go:195] Run: ls
	I0417 18:56:58.858993  100900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:56:58.863399  100900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:56:58.863423  100900 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:56:58.863432  100900 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:56:58.863449  100900 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:56:58.863773  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.863810  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.879375  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0417 18:56:58.879932  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.880449  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.880475  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.880818  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.881006  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:56:58.882612  100900 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:56:58.882631  100900 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:58.882990  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.883039  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.898779  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
	I0417 18:56:58.899226  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.899798  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.899826  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.900242  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.900429  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:56:58.903195  100900 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:58.903601  100900 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:58.903639  100900 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:58.903776  100900 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:56:58.904083  100900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:56:58.904120  100900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:56:58.919354  100900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0417 18:56:58.919841  100900 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:56:58.920353  100900 main.go:141] libmachine: Using API Version  1
	I0417 18:56:58.920376  100900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:56:58.920709  100900 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:56:58.920915  100900 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:56:58.921148  100900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:56:58.921172  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:56:58.924373  100900 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:58.924962  100900 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:56:58.924992  100900 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:56:58.925176  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:56:58.925383  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:56:58.925642  100900 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:56:58.925836  100900 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:56:59.005333  100900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:56:59.020968  100900 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
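The stderr above ends with "dial tcp 192.168.39.236:22: connect: no route to host", which is why ha-467706-m02 is reported with Host:Error and Kubelet/APIServer:Nonexistent. The sketch below is a minimal, stand-alone reachability probe of that same SSH endpoint; it assumes the node still holds the IP shown in the DHCP lease above, mirrors only the TCP dial step of the failing SSH session, and is not part of minikube or the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the DHCP lease for ha-467706-m02 in this report;
	// adjust it for other runs. The timeout keeps the probe from hanging the
	// way the status command did before its retries gave up.
	addr := "192.168.39.236:22"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable, matching the log:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable:", addr)
}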
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 7 (669.218243ms)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-467706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:57:10.422162  101037 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:57:10.422333  101037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:57:10.422347  101037 out.go:304] Setting ErrFile to fd 2...
	I0417 18:57:10.422351  101037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:57:10.422564  101037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:57:10.422781  101037 out.go:298] Setting JSON to false
	I0417 18:57:10.422817  101037 mustload.go:65] Loading cluster: ha-467706
	I0417 18:57:10.422890  101037 notify.go:220] Checking for updates...
	I0417 18:57:10.423287  101037 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:57:10.423305  101037 status.go:255] checking status of ha-467706 ...
	I0417 18:57:10.423766  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.423844  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.442967  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0417 18:57:10.443432  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.444032  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.444062  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.444538  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.444800  101037 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:57:10.446488  101037 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 18:57:10.446507  101037 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:57:10.446941  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.446994  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.462312  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35073
	I0417 18:57:10.462760  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.463320  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.463344  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.463693  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.463917  101037 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:57:10.466669  101037 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:57:10.467100  101037 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:57:10.467126  101037 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:57:10.467325  101037 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:57:10.467663  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.467703  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.483063  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0417 18:57:10.483528  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.484021  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.484045  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.484354  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.484563  101037 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:57:10.484791  101037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:57:10.484818  101037 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:57:10.487735  101037 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:57:10.488229  101037 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:57:10.488255  101037 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:57:10.488405  101037 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:57:10.488634  101037 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:57:10.488803  101037 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:57:10.488961  101037 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:57:10.572069  101037 ssh_runner.go:195] Run: systemctl --version
	I0417 18:57:10.580304  101037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:57:10.598244  101037 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:57:10.598287  101037 api_server.go:166] Checking apiserver status ...
	I0417 18:57:10.598345  101037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:57:10.618993  101037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	W0417 18:57:10.630727  101037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:57:10.630792  101037 ssh_runner.go:195] Run: ls
	I0417 18:57:10.636140  101037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:57:10.640727  101037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:57:10.640753  101037 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 18:57:10.640764  101037 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:57:10.640798  101037 status.go:255] checking status of ha-467706-m02 ...
	I0417 18:57:10.641094  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.641136  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.656248  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0417 18:57:10.656698  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.657348  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.657373  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.657699  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.657914  101037 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:57:10.659472  101037 status.go:330] ha-467706-m02 host status = "Stopped" (err=<nil>)
	I0417 18:57:10.659490  101037 status.go:343] host is not running, skipping remaining checks
	I0417 18:57:10.659498  101037 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:57:10.659521  101037 status.go:255] checking status of ha-467706-m03 ...
	I0417 18:57:10.659857  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.659897  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.674696  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I0417 18:57:10.675178  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.677312  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.677343  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.677723  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.677945  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:57:10.679497  101037 status.go:330] ha-467706-m03 host status = "Running" (err=<nil>)
	I0417 18:57:10.679513  101037 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:57:10.679798  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.679843  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.694567  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0417 18:57:10.694975  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.695454  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.695480  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.695870  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.696056  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:57:10.698761  101037 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:10.699293  101037 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:57:10.699321  101037 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:10.699476  101037 host.go:66] Checking if "ha-467706-m03" exists ...
	I0417 18:57:10.699759  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.699793  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.715392  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0417 18:57:10.715862  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.716466  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.716504  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.716872  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.717095  101037 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:57:10.717303  101037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:57:10.717326  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:57:10.720608  101037 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:10.721176  101037 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:57:10.721200  101037 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:10.721260  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:57:10.721468  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:57:10.721633  101037 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:57:10.721855  101037 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:57:10.812110  101037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:57:10.829399  101037 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 18:57:10.829438  101037 api_server.go:166] Checking apiserver status ...
	I0417 18:57:10.829483  101037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:57:10.845128  101037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0417 18:57:10.855764  101037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:57:10.855832  101037 ssh_runner.go:195] Run: ls
	I0417 18:57:10.861069  101037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:57:10.865554  101037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:57:10.865583  101037 status.go:422] ha-467706-m03 apiserver status = Running (err=<nil>)
	I0417 18:57:10.865593  101037 status.go:257] ha-467706-m03 status: &{Name:ha-467706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:57:10.865646  101037 status.go:255] checking status of ha-467706-m04 ...
	I0417 18:57:10.865956  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.866005  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.882520  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0417 18:57:10.882985  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.883522  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.883544  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.883941  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.884130  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:57:10.885814  101037 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 18:57:10.885831  101037 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:57:10.886210  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.886257  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.901669  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0417 18:57:10.902173  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.902732  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.902761  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.903070  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.903244  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 18:57:10.906276  101037 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:10.906593  101037 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:57:10.906645  101037 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:10.906741  101037 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 18:57:10.907055  101037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:10.907096  101037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:10.923525  101037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0417 18:57:10.924068  101037 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:10.924664  101037 main.go:141] libmachine: Using API Version  1
	I0417 18:57:10.924695  101037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:10.925069  101037 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:10.925280  101037 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:57:10.925483  101037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:57:10.925512  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:57:10.929441  101037 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:10.929852  101037 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:57:10.929879  101037 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:10.930037  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:57:10.930236  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:57:10.930469  101037 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:57:10.930610  101037 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:57:11.014207  101037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:57:11.032057  101037 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr" : exit status 7
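The assertion at ha_test.go:432 only inspects the exit code: minikube status exits non-zero because ha-467706-m02 is still Stopped after the "node start m02" attempt. Below is a minimal sketch of an equivalent stand-alone check; the binary path and profile name are copied from this run and assume the same working directory as the CI job, and the snippet is an illustration rather than the test's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test runs and look only at the exit code.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "ha-467706", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err == nil {
		fmt.Println("status exit code 0: all checked components reported healthy")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In this run the command exited with code 7 because ha-467706-m02
		// was reported as Stopped.
		fmt.Printf("status exit code %d: at least one component is not healthy\n", exitErr.ExitCode())
	} else {
		fmt.Println("failed to run status command:", err)
	}
}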
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-467706 -n ha-467706
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-467706 logs -n 25: (1.579508796s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m03_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m04 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp testdata/cp-test.txt                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m04_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03:/home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m03 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-467706 node stop m02 -v=7                                                     | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-467706 node start m02 -v=7                                                    | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 18:49:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 18:49:22.621343   96006 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:49:22.621632   96006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:49:22.621642   96006 out.go:304] Setting ErrFile to fd 2...
	I0417 18:49:22.621647   96006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:49:22.621840   96006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:49:22.622485   96006 out.go:298] Setting JSON to false
	I0417 18:49:22.623337   96006 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9111,"bootTime":1713370652,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:49:22.623412   96006 start.go:139] virtualization: kvm guest
	I0417 18:49:22.625735   96006 out.go:177] * [ha-467706] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:49:22.627418   96006 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:49:22.629062   96006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:49:22.627413   96006 notify.go:220] Checking for updates...
	I0417 18:49:22.630766   96006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:49:22.632309   96006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:22.633783   96006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:49:22.635377   96006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:49:22.636911   96006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:49:22.672838   96006 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 18:49:22.674139   96006 start.go:297] selected driver: kvm2
	I0417 18:49:22.674151   96006 start.go:901] validating driver "kvm2" against <nil>
	I0417 18:49:22.674166   96006 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:49:22.674857   96006 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:49:22.674927   96006 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 18:49:22.690558   96006 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 18:49:22.690619   96006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 18:49:22.690882   96006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:49:22.691013   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:49:22.691030   96006 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0417 18:49:22.691039   96006 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0417 18:49:22.691121   96006 start.go:340] cluster config:
	{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:49:22.691296   96006 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:49:22.693487   96006 out.go:177] * Starting "ha-467706" primary control-plane node in "ha-467706" cluster
	I0417 18:49:22.694987   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:49:22.695039   96006 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 18:49:22.695047   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:49:22.695149   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:49:22.695174   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:49:22.695683   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:49:22.695728   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json: {Name:mk22dabb72a30759b87fd992aca98de3628495f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:22.695916   96006 start.go:360] acquireMachinesLock for ha-467706: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:49:22.695957   96006 start.go:364] duration metric: took 24.804µs to acquireMachinesLock for "ha-467706"
	I0417 18:49:22.695978   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:49:22.696064   96006 start.go:125] createHost starting for "" (driver="kvm2")
	I0417 18:49:22.697982   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:49:22.698141   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:49:22.698192   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:49:22.713521   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42335
	I0417 18:49:22.714905   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:49:22.715477   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:49:22.715503   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:49:22.715863   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:49:22.716093   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:22.716258   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:22.716442   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:49:22.716465   96006 client.go:168] LocalClient.Create starting
	I0417 18:49:22.716497   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:49:22.716531   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:49:22.716546   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:49:22.716601   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:49:22.716619   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:49:22.716649   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:49:22.716665   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:49:22.716674   96006 main.go:141] libmachine: (ha-467706) Calling .PreCreateCheck
	I0417 18:49:22.717100   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:22.717515   96006 main.go:141] libmachine: Creating machine...
	I0417 18:49:22.717530   96006 main.go:141] libmachine: (ha-467706) Calling .Create
	I0417 18:49:22.717669   96006 main.go:141] libmachine: (ha-467706) Creating KVM machine...
	I0417 18:49:22.719151   96006 main.go:141] libmachine: (ha-467706) DBG | found existing default KVM network
	I0417 18:49:22.719847   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:22.719679   96029 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0417 18:49:22.719880   96006 main.go:141] libmachine: (ha-467706) DBG | created network xml: 
	I0417 18:49:22.719893   96006 main.go:141] libmachine: (ha-467706) DBG | <network>
	I0417 18:49:22.719903   96006 main.go:141] libmachine: (ha-467706) DBG |   <name>mk-ha-467706</name>
	I0417 18:49:22.719910   96006 main.go:141] libmachine: (ha-467706) DBG |   <dns enable='no'/>
	I0417 18:49:22.719918   96006 main.go:141] libmachine: (ha-467706) DBG |   
	I0417 18:49:22.719930   96006 main.go:141] libmachine: (ha-467706) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0417 18:49:22.719948   96006 main.go:141] libmachine: (ha-467706) DBG |     <dhcp>
	I0417 18:49:22.719957   96006 main.go:141] libmachine: (ha-467706) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0417 18:49:22.719970   96006 main.go:141] libmachine: (ha-467706) DBG |     </dhcp>
	I0417 18:49:22.719982   96006 main.go:141] libmachine: (ha-467706) DBG |   </ip>
	I0417 18:49:22.719991   96006 main.go:141] libmachine: (ha-467706) DBG |   
	I0417 18:49:22.719999   96006 main.go:141] libmachine: (ha-467706) DBG | </network>
	I0417 18:49:22.720008   96006 main.go:141] libmachine: (ha-467706) DBG | 
	I0417 18:49:22.725332   96006 main.go:141] libmachine: (ha-467706) DBG | trying to create private KVM network mk-ha-467706 192.168.39.0/24...
	I0417 18:49:22.793280   96006 main.go:141] libmachine: (ha-467706) DBG | private KVM network mk-ha-467706 192.168.39.0/24 created
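
The preceding lines show the kvm2 driver defining a private libvirt network (mk-ha-467706, 192.168.39.0/24 with a DHCP range of .2-.253) from the XML it just printed and then creating it. As a rough, stand-alone illustration of the same operation, the Go sketch below writes an equivalent network definition to a temp file and feeds it to virsh net-define / net-start via os/exec; the driver itself talks to libvirt through its API rather than shelling out, so this is only an approximation of the step, not minikube's code.

	// netdefine.go - illustrative only: define and start a libvirt network
	// equivalent to the mk-ha-467706 XML printed in the log above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	const networkXML = `<network>
	  <name>mk-ha-467706</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Write the definition to a temp file so virsh can read it.
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			log.Fatal(err)
		}
		f.Close()

		// Define the persistent network, then start it.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-ha-467706"},
		} {
			cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
			out, err := cmd.CombinedOutput()
			if err != nil {
				log.Fatalf("virsh %v: %v\n%s", args, err, out)
			}
			fmt.Printf("virsh %v: %s", args, out)
		}
	}
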
	I0417 18:49:22.793320   96006 main.go:141] libmachine: (ha-467706) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 ...
	I0417 18:49:22.793340   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:22.793255   96029 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:22.793359   96006 main.go:141] libmachine: (ha-467706) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:49:22.793504   96006 main.go:141] libmachine: (ha-467706) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:49:23.035075   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.034912   96029 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa...
	I0417 18:49:23.188428   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.188292   96029 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/ha-467706.rawdisk...
	I0417 18:49:23.188457   96006 main.go:141] libmachine: (ha-467706) DBG | Writing magic tar header
	I0417 18:49:23.188493   96006 main.go:141] libmachine: (ha-467706) DBG | Writing SSH key tar header
	I0417 18:49:23.188535   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:23.188433   96029 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 ...
	I0417 18:49:23.188563   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706
	I0417 18:49:23.188608   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706 (perms=drwx------)
	I0417 18:49:23.188626   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:49:23.188635   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:49:23.188645   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:49:23.188657   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:49:23.188664   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:49:23.188698   96006 main.go:141] libmachine: (ha-467706) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:49:23.188716   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:49:23.188722   96006 main.go:141] libmachine: (ha-467706) Creating domain...
	I0417 18:49:23.188733   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:49:23.188744   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:49:23.188761   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:49:23.188787   96006 main.go:141] libmachine: (ha-467706) DBG | Checking permissions on dir: /home
	I0417 18:49:23.188798   96006 main.go:141] libmachine: (ha-467706) DBG | Skipping /home - not owner
	I0417 18:49:23.190017   96006 main.go:141] libmachine: (ha-467706) define libvirt domain using xml: 
	I0417 18:49:23.190033   96006 main.go:141] libmachine: (ha-467706) <domain type='kvm'>
	I0417 18:49:23.190039   96006 main.go:141] libmachine: (ha-467706)   <name>ha-467706</name>
	I0417 18:49:23.190044   96006 main.go:141] libmachine: (ha-467706)   <memory unit='MiB'>2200</memory>
	I0417 18:49:23.190049   96006 main.go:141] libmachine: (ha-467706)   <vcpu>2</vcpu>
	I0417 18:49:23.190054   96006 main.go:141] libmachine: (ha-467706)   <features>
	I0417 18:49:23.190059   96006 main.go:141] libmachine: (ha-467706)     <acpi/>
	I0417 18:49:23.190067   96006 main.go:141] libmachine: (ha-467706)     <apic/>
	I0417 18:49:23.190072   96006 main.go:141] libmachine: (ha-467706)     <pae/>
	I0417 18:49:23.190086   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190091   96006 main.go:141] libmachine: (ha-467706)   </features>
	I0417 18:49:23.190098   96006 main.go:141] libmachine: (ha-467706)   <cpu mode='host-passthrough'>
	I0417 18:49:23.190103   96006 main.go:141] libmachine: (ha-467706)   
	I0417 18:49:23.190110   96006 main.go:141] libmachine: (ha-467706)   </cpu>
	I0417 18:49:23.190115   96006 main.go:141] libmachine: (ha-467706)   <os>
	I0417 18:49:23.190136   96006 main.go:141] libmachine: (ha-467706)     <type>hvm</type>
	I0417 18:49:23.190145   96006 main.go:141] libmachine: (ha-467706)     <boot dev='cdrom'/>
	I0417 18:49:23.190149   96006 main.go:141] libmachine: (ha-467706)     <boot dev='hd'/>
	I0417 18:49:23.190154   96006 main.go:141] libmachine: (ha-467706)     <bootmenu enable='no'/>
	I0417 18:49:23.190161   96006 main.go:141] libmachine: (ha-467706)   </os>
	I0417 18:49:23.190166   96006 main.go:141] libmachine: (ha-467706)   <devices>
	I0417 18:49:23.190182   96006 main.go:141] libmachine: (ha-467706)     <disk type='file' device='cdrom'>
	I0417 18:49:23.190196   96006 main.go:141] libmachine: (ha-467706)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/boot2docker.iso'/>
	I0417 18:49:23.190209   96006 main.go:141] libmachine: (ha-467706)       <target dev='hdc' bus='scsi'/>
	I0417 18:49:23.190242   96006 main.go:141] libmachine: (ha-467706)       <readonly/>
	I0417 18:49:23.190268   96006 main.go:141] libmachine: (ha-467706)     </disk>
	I0417 18:49:23.190284   96006 main.go:141] libmachine: (ha-467706)     <disk type='file' device='disk'>
	I0417 18:49:23.190297   96006 main.go:141] libmachine: (ha-467706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:49:23.190312   96006 main.go:141] libmachine: (ha-467706)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/ha-467706.rawdisk'/>
	I0417 18:49:23.190324   96006 main.go:141] libmachine: (ha-467706)       <target dev='hda' bus='virtio'/>
	I0417 18:49:23.190351   96006 main.go:141] libmachine: (ha-467706)     </disk>
	I0417 18:49:23.190371   96006 main.go:141] libmachine: (ha-467706)     <interface type='network'>
	I0417 18:49:23.190381   96006 main.go:141] libmachine: (ha-467706)       <source network='mk-ha-467706'/>
	I0417 18:49:23.190392   96006 main.go:141] libmachine: (ha-467706)       <model type='virtio'/>
	I0417 18:49:23.190400   96006 main.go:141] libmachine: (ha-467706)     </interface>
	I0417 18:49:23.190409   96006 main.go:141] libmachine: (ha-467706)     <interface type='network'>
	I0417 18:49:23.190418   96006 main.go:141] libmachine: (ha-467706)       <source network='default'/>
	I0417 18:49:23.190423   96006 main.go:141] libmachine: (ha-467706)       <model type='virtio'/>
	I0417 18:49:23.190430   96006 main.go:141] libmachine: (ha-467706)     </interface>
	I0417 18:49:23.190435   96006 main.go:141] libmachine: (ha-467706)     <serial type='pty'>
	I0417 18:49:23.190444   96006 main.go:141] libmachine: (ha-467706)       <target port='0'/>
	I0417 18:49:23.190450   96006 main.go:141] libmachine: (ha-467706)     </serial>
	I0417 18:49:23.190455   96006 main.go:141] libmachine: (ha-467706)     <console type='pty'>
	I0417 18:49:23.190463   96006 main.go:141] libmachine: (ha-467706)       <target type='serial' port='0'/>
	I0417 18:49:23.190468   96006 main.go:141] libmachine: (ha-467706)     </console>
	I0417 18:49:23.190475   96006 main.go:141] libmachine: (ha-467706)     <rng model='virtio'>
	I0417 18:49:23.190481   96006 main.go:141] libmachine: (ha-467706)       <backend model='random'>/dev/random</backend>
	I0417 18:49:23.190488   96006 main.go:141] libmachine: (ha-467706)     </rng>
	I0417 18:49:23.190493   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190503   96006 main.go:141] libmachine: (ha-467706)     
	I0417 18:49:23.190512   96006 main.go:141] libmachine: (ha-467706)   </devices>
	I0417 18:49:23.190527   96006 main.go:141] libmachine: (ha-467706) </domain>
	I0417 18:49:23.190535   96006 main.go:141] libmachine: (ha-467706) 
	I0417 18:49:23.194869   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:02:89:fa in network default
	I0417 18:49:23.195525   96006 main.go:141] libmachine: (ha-467706) Ensuring networks are active...
	I0417 18:49:23.195544   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:23.196256   96006 main.go:141] libmachine: (ha-467706) Ensuring network default is active
	I0417 18:49:23.196508   96006 main.go:141] libmachine: (ha-467706) Ensuring network mk-ha-467706 is active
	I0417 18:49:23.196986   96006 main.go:141] libmachine: (ha-467706) Getting domain xml...
	I0417 18:49:23.197668   96006 main.go:141] libmachine: (ha-467706) Creating domain...
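
The full domain XML dumped above (2200 MiB of memory, 2 vCPUs, the boot2docker ISO as a cdrom, the raw disk, two virtio NICs, a serial console and a virtio RNG) is generated by the driver before it defines and starts the domain. A minimal sketch of rendering a trimmed-down version of such a definition from a Go text/template is shown below; the struct fields, placeholder paths and shortened template are illustrative, not the driver's actual ones.

	// domainxml.go - illustrative sketch: render a (trimmed) libvirt domain
	// definition like the one logged above from a Go text/template.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	type domainConfig struct {
		Name      string
		MemoryMiB int
		VCPU      int
		DiskPath  string
		ISOPath   string
		Network   string
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.VCPU}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		cfg := domainConfig{
			Name:      "ha-467706",
			MemoryMiB: 2200,
			VCPU:      2,
			DiskPath:  "/path/to/ha-467706.rawdisk", // placeholder
			ISOPath:   "/path/to/boot2docker.iso",   // placeholder
			Network:   "mk-ha-467706",
		}
		t := template.Must(template.New("domain").Parse(domainTmpl))
		if err := t.Execute(os.Stdout, cfg); err != nil {
			log.Fatal(err)
		}
	}
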
	I0417 18:49:24.377122   96006 main.go:141] libmachine: (ha-467706) Waiting to get IP...
	I0417 18:49:24.378185   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.378610   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.378673   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.378570   96029 retry.go:31] will retry after 187.650817ms: waiting for machine to come up
	I0417 18:49:24.568112   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.568585   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.568610   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.568519   96029 retry.go:31] will retry after 263.565831ms: waiting for machine to come up
	I0417 18:49:24.834051   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:24.834456   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:24.834493   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:24.834423   96029 retry.go:31] will retry after 431.588458ms: waiting for machine to come up
	I0417 18:49:25.268032   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:25.268496   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:25.268516   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:25.268462   96029 retry.go:31] will retry after 586.894254ms: waiting for machine to come up
	I0417 18:49:25.857433   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:25.857951   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:25.857983   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:25.857879   96029 retry.go:31] will retry after 478.597863ms: waiting for machine to come up
	I0417 18:49:26.337567   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:26.337946   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:26.337971   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:26.337906   96029 retry.go:31] will retry after 722.019817ms: waiting for machine to come up
	I0417 18:49:27.061866   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:27.062146   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:27.062192   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:27.062092   96029 retry.go:31] will retry after 901.648194ms: waiting for machine to come up
	I0417 18:49:27.965748   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:27.966079   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:27.966102   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:27.966048   96029 retry.go:31] will retry after 954.18526ms: waiting for machine to come up
	I0417 18:49:28.921955   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:28.922298   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:28.922330   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:28.922239   96029 retry.go:31] will retry after 1.478334758s: waiting for machine to come up
	I0417 18:49:30.401822   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:30.402348   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:30.402377   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:30.402273   96029 retry.go:31] will retry after 2.24649483s: waiting for machine to come up
	I0417 18:49:32.651659   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:32.652032   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:32.652060   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:32.651979   96029 retry.go:31] will retry after 2.647468116s: waiting for machine to come up
	I0417 18:49:35.302402   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:35.302798   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:35.302829   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:35.302749   96029 retry.go:31] will retry after 2.438483753s: waiting for machine to come up
	I0417 18:49:37.743278   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:37.743704   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:37.743739   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:37.743648   96029 retry.go:31] will retry after 3.206013787s: waiting for machine to come up
	I0417 18:49:40.953078   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:40.953481   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find current IP address of domain ha-467706 in network mk-ha-467706
	I0417 18:49:40.953510   96006 main.go:141] libmachine: (ha-467706) DBG | I0417 18:49:40.953465   96029 retry.go:31] will retry after 4.754103915s: waiting for machine to come up
	I0417 18:49:45.711373   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.711801   96006 main.go:141] libmachine: (ha-467706) Found IP for machine: 192.168.39.159
	I0417 18:49:45.711825   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has current primary IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.711832   96006 main.go:141] libmachine: (ha-467706) Reserving static IP address...
	I0417 18:49:45.712231   96006 main.go:141] libmachine: (ha-467706) DBG | unable to find host DHCP lease matching {name: "ha-467706", mac: "52:54:00:3b:c1:55", ip: "192.168.39.159"} in network mk-ha-467706
	I0417 18:49:45.790706   96006 main.go:141] libmachine: (ha-467706) Reserved static IP address: 192.168.39.159
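
The block above is the wait-for-IP loop: each attempt that finds no DHCP lease for the domain's MAC address is retried after a progressively longer delay (the retry.go:31 lines) until an address shows up. A generic sketch of that retry-with-growing-backoff pattern, with made-up function names, delays and limits, could look like this:

	// retry.go (sketch) - retry an operation with growing, capped backoff,
	// in the spirit of the "will retry after ..." lines above. Function
	// names, delays and limits are illustrative, not minikube's.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls op until it succeeds or the deadline elapses.
	func retryWithBackoff(op func() error, deadline time.Duration) error {
		start := time.Now()
		delay := 200 * time.Millisecond
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			// Add some jitter, then grow the base delay, capping it at 5s.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err, "after", attempts, "attempts")
	}
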
	I0417 18:49:45.790759   96006 main.go:141] libmachine: (ha-467706) Waiting for SSH to be available...
	I0417 18:49:45.790772   96006 main.go:141] libmachine: (ha-467706) DBG | Getting to WaitForSSH function...
	I0417 18:49:45.793775   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.794200   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:45.794236   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.794395   96006 main.go:141] libmachine: (ha-467706) DBG | Using SSH client type: external
	I0417 18:49:45.794422   96006 main.go:141] libmachine: (ha-467706) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa (-rw-------)
	I0417 18:49:45.794455   96006 main.go:141] libmachine: (ha-467706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:49:45.794471   96006 main.go:141] libmachine: (ha-467706) DBG | About to run SSH command:
	I0417 18:49:45.794484   96006 main.go:141] libmachine: (ha-467706) DBG | exit 0
	I0417 18:49:45.917151   96006 main.go:141] libmachine: (ha-467706) DBG | SSH cmd err, output: <nil>: 
	I0417 18:49:45.917423   96006 main.go:141] libmachine: (ha-467706) KVM machine creation complete!
	I0417 18:49:45.917783   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:45.918347   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:45.918561   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:45.918782   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:49:45.918799   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:49:45.920179   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:49:45.920196   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:49:45.920204   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:49:45.920218   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:45.922787   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.923202   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:45.923230   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:45.923388   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:45.923626   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:45.923791   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:45.923975   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:45.924120   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:45.924416   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:45.924434   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:49:46.020232   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
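
Both SSH probes above reduce to "run exit 0 over SSH and treat success as the machine being reachable". A stand-alone sketch of the external-client variant, reusing a subset of the OpenSSH options that appear in the log (user, host, port and key path are placeholders), might be:

	// sshprobe.go (sketch) - check SSH reachability by running "exit 0",
	// mirroring the external-client probe in the log. Host and key values
	// are placeholders, not taken from a real environment.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReachable(user, host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		// Poll until the probe succeeds or we give up.
		for i := 0; i < 10; i++ {
			if err := sshReachable("docker", "192.168.39.159", "/path/to/id_rsa"); err != nil {
				fmt.Println(err)
				time.Sleep(2 * time.Second)
				continue
			}
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("gave up waiting for SSH")
	}
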
	I0417 18:49:46.020256   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:49:46.020267   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.023222   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.023614   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.023642   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.023856   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.024087   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.024295   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.024474   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.024674   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.024895   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.024909   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:49:46.122008   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:49:46.122077   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:49:46.122084   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:49:46.122093   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.122346   96006 buildroot.go:166] provisioning hostname "ha-467706"
	I0417 18:49:46.122366   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.122583   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.125229   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.125668   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.125696   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.125858   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.126051   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.126223   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.126403   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.126575   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.126745   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.126758   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706 && echo "ha-467706" | sudo tee /etc/hostname
	I0417 18:49:46.241359   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:49:46.241388   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.244126   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.244567   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.244610   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.244867   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.245076   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.245277   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.245428   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.245620   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.245841   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.245860   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:49:46.356754   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
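
The provisioning command above sets the hostname and then makes sure /etc/hosts carries an entry for it: if the hostname is missing, an existing 127.0.1.1 line is rewritten, otherwise one is appended. The same logic applied to the file contents in-process, rather than with grep and sed over SSH, can be sketched as:

	// hostsentry.go (sketch) - ensure /etc/hosts maps 127.0.1.1 to the new
	// hostname, mirroring the grep/sed snippet in the log. It operates on a
	// string instead of the real file so it is safe to run anywhere.
	package main

	import (
		"fmt"
		"strings"
	)

	func ensureHostEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		// If any line already ends with the hostname, leave the file alone.
		for _, line := range lines {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == hostname {
				return hosts
			}
		}
		// Rewrite an existing 127.0.1.1 entry if there is one...
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return strings.Join(lines, "\n")
			}
		}
		// ...otherwise append a fresh entry.
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}

	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostEntry(in, "ha-467706"))
	}
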
	I0417 18:49:46.356807   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:49:46.356900   96006 buildroot.go:174] setting up certificates
	I0417 18:49:46.356915   96006 provision.go:84] configureAuth start
	I0417 18:49:46.356931   96006 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:49:46.357220   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:46.359879   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.360284   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.360311   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.360453   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.362942   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.363300   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.363331   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.363469   96006 provision.go:143] copyHostCerts
	I0417 18:49:46.363499   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:49:46.363558   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:49:46.363584   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:49:46.363675   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:49:46.363791   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:49:46.363813   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:49:46.363818   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:49:46.363848   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:49:46.363901   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:49:46.363917   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:49:46.363923   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:49:46.363943   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:49:46.364003   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706 san=[127.0.0.1 192.168.39.159 ha-467706 localhost minikube]
	I0417 18:49:46.547992   96006 provision.go:177] copyRemoteCerts
	I0417 18:49:46.548058   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:49:46.548085   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.550923   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.551238   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.551272   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.551446   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.551662   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.551812   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.551945   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:46.631629   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:49:46.631706   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:49:46.660510   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:49:46.660601   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0417 18:49:46.686435   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:49:46.686519   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 18:49:46.712855   96006 provision.go:87] duration metric: took 355.924441ms to configureAuth
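
configureAuth above copies the CA and client certificates into the local machine store and then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A rough stand-alone equivalent of that remote copy using plain scp is sketched below; the host, key and file paths are placeholders, and the real flow writes the files through sudo on the guest rather than scp'ing them straight into /etc/docker.

	// certcopy.go (sketch) - push certificate files to a remote host with
	// scp, roughly what copyRemoteCerts does via minikube's ssh_runner.
	// Host, key and file paths below are placeholders; the real flow writes
	// into /etc/docker through sudo on the guest.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func scpTo(keyPath, user, host, local, remote string) error {
		cmd := exec.Command("scp",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			local,
			fmt.Sprintf("%s@%s:%s", user, host, remote))
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v (%s)", local, err, out)
		}
		return nil
	}

	func main() {
		files := map[string]string{
			"certs/ca.pem":            "/tmp/ca.pem",
			"machines/server.pem":     "/tmp/server.pem",
			"machines/server-key.pem": "/tmp/server-key.pem",
		}
		for local, remote := range files {
			if err := scpTo("/path/to/id_rsa", "docker", "192.168.39.159", local, remote); err != nil {
				log.Fatal(err)
			}
			fmt.Println("copied", local, "->", remote)
		}
	}
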
	I0417 18:49:46.712892   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:49:46.713118   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:49:46.713214   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.715807   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.716194   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.716214   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.716475   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.716658   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.716844   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.717015   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.717222   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:46.717455   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:46.717479   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:49:46.986253   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 18:49:46.986294   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:49:46.986306   96006 main.go:141] libmachine: (ha-467706) Calling .GetURL
	I0417 18:49:46.987867   96006 main.go:141] libmachine: (ha-467706) DBG | Using libvirt version 6000000
	I0417 18:49:46.990607   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.991025   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.991057   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.991208   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:49:46.991225   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:49:46.991235   96006 client.go:171] duration metric: took 24.274758262s to LocalClient.Create
	I0417 18:49:46.991267   96006 start.go:167] duration metric: took 24.274825568s to libmachine.API.Create "ha-467706"
	I0417 18:49:46.991278   96006 start.go:293] postStartSetup for "ha-467706" (driver="kvm2")
	I0417 18:49:46.991298   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:49:46.991317   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:46.991606   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:49:46.991639   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:46.994027   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.994408   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:46.994434   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:46.994582   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:46.994815   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:46.994988   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:46.995160   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.075556   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:49:47.080124   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:49:47.080157   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:49:47.080240   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:49:47.080378   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:49:47.080395   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:49:47.080509   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:49:47.090584   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:49:47.116825   96006 start.go:296] duration metric: took 125.527222ms for postStartSetup
	I0417 18:49:47.116884   96006 main.go:141] libmachine: (ha-467706) Calling .GetConfigRaw
	I0417 18:49:47.117502   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:47.120183   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.120522   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.120553   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.120862   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:49:47.121043   96006 start.go:128] duration metric: took 24.424966199s to createHost
	I0417 18:49:47.121067   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.123333   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.123641   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.123669   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.123762   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.123939   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.124163   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.124296   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.124473   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:49:47.124691   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:49:47.124711   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:49:47.226058   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379787.196532675
	
	I0417 18:49:47.226085   96006 fix.go:216] guest clock: 1713379787.196532675
	I0417 18:49:47.226096   96006 fix.go:229] Guest: 2024-04-17 18:49:47.196532675 +0000 UTC Remote: 2024-04-17 18:49:47.12105477 +0000 UTC m=+24.548401797 (delta=75.477905ms)
	I0417 18:49:47.226122   96006 fix.go:200] guest clock delta is within tolerance: 75.477905ms
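
The clock check above runs date +%s.%N on the guest (logged with Go's %!s(MISSING) formatting artifacts), parses the result, and compares it to the host clock so that a large skew could be corrected. A small sketch of that parse-and-compare step, with an arbitrary tolerance, follows:

	// clockskew.go (sketch) - parse a guest `date +%s.%N` value and compare
	// it to the local clock, as in the fix.go lines above. The tolerance
	// used here is arbitrary.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseUnixSecondsNanos turns "1713379787.196532675" into a time.Time.
	func parseUnixSecondsNanos(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad the fraction to 9 digits so it reads as nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseUnixSecondsNanos("1713379787.196532675")
		if err != nil {
			panic(err)
		}
		host := time.Now()
		delta := host.Sub(guest)
		fmt.Printf("guest clock: %s, host clock: %s, delta: %s\n", guest, host, delta)
		if math.Abs(delta.Seconds()) < 2.0 { // arbitrary 2s tolerance
			fmt.Println("guest clock delta is within tolerance")
		} else {
			fmt.Println("guest clock is too far off; would resync")
		}
	}
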
	I0417 18:49:47.226132   96006 start.go:83] releasing machines lock for "ha-467706", held for 24.530164s
	I0417 18:49:47.226159   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.226466   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:47.229254   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.229610   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.229641   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.229826   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230365   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230563   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:49:47.230659   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:49:47.230708   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.230824   96006 ssh_runner.go:195] Run: cat /version.json
	I0417 18:49:47.230857   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:49:47.233295   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.233738   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.233769   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.233789   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.234080   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.234254   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.234319   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:47.234371   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:47.234459   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:49:47.234461   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.234625   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:49:47.234712   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.234758   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:49:47.234859   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:49:47.332201   96006 ssh_runner.go:195] Run: systemctl --version
	I0417 18:49:47.338467   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:49:47.503987   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:49:47.510271   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:49:47.510357   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:49:47.526939   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:49:47.526969   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:49:47.527048   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:49:47.544808   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:49:47.560276   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:49:47.560342   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:49:47.575493   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:49:47.590305   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:49:47.703106   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:49:47.847924   96006 docker.go:233] disabling docker service ...
	I0417 18:49:47.848005   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:49:47.863461   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:49:47.877676   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:49:48.022562   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:49:48.151007   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:49:48.166077   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:49:48.186073   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:49:48.186142   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.197296   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:49:48.197367   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.209670   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.221372   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.233508   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:49:48.245432   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.257212   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.276089   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:49:48.288509   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:49:48.299360   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:49:48.299433   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:49:48.313454   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 18:49:48.324328   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:49:48.450531   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
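
The command sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, and then restarts CRI-O. The same substitutions performed on an in-memory config string, rather than with sed over SSH, can be sketched as:

	// crioconf.go (sketch) - apply the same edits the sed commands above
	// make to 02-crio.conf, but on an in-memory sample for illustration.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		// Pin the pause image.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		// Switch the cgroup manager and force conmon into the pod cgroup.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		// Allow binding low ports inside pods.
		conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
			ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
		return conf
	}

	func main() {
		sample := `[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	default_sysctls = [
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.8"
	`
		fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}
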
	I0417 18:49:48.594212   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:49:48.594298   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:49:48.599281   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:49:48.599344   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:49:48.603340   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:49:48.642689   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:49:48.642799   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:49:48.671577   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:49:48.702661   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:49:48.704026   96006 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:49:48.706752   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:48.707106   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:49:48.707135   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:49:48.707492   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:49:48.712013   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:49:48.726914   96006 kubeadm.go:877] updating cluster {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 18:49:48.727027   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:49:48.727072   96006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 18:49:48.763371   96006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0417 18:49:48.763447   96006 ssh_runner.go:195] Run: which lz4
	I0417 18:49:48.767730   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0417 18:49:48.767859   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0417 18:49:48.772298   96006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0417 18:49:48.772335   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394547972 bytes)
	I0417 18:49:50.300197   96006 crio.go:462] duration metric: took 1.532382815s to copy over tarball
	I0417 18:49:50.300268   96006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0417 18:49:52.542643   96006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.242348458s)
	I0417 18:49:52.542670   96006 crio.go:469] duration metric: took 2.242444327s to extract the tarball
	I0417 18:49:52.542679   96006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 18:49:52.580669   96006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 18:49:52.627109   96006 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 18:49:52.627137   96006 cache_images.go:84] Images are preloaded, skipping loading
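A preload is judged usable by the presence of the expected control-plane images in the runtime; after extraction the same check can be repeated by hand (the code above looks specifically for registry.k8s.io/kube-apiserver:v1.30.0-rc.2):

    sudo crictl images | grep kube-apiserver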
	I0417 18:49:52.627146   96006 kubeadm.go:928] updating node { 192.168.39.159 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:49:52.627259   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
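The [Unit]/[Service]/[Install] fragment above is rendered into a systemd drop-in (10-kubeadm.conf, copied to the node a few lines further down). Once in place, the effective unit and the extra kubelet flags can be inspected with:

    systemctl cat kubelet                # base unit plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager  # shows --node-ip and --hostname-override on the running command line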
	I0417 18:49:52.627324   96006 ssh_runner.go:195] Run: crio config
	I0417 18:49:52.672604   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:49:52.672627   96006 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0417 18:49:52.672640   96006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 18:49:52.672667   96006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-467706 NodeName:ha-467706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 18:49:52.672846   96006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-467706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
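The four stanzas above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml (via kubeadm.yaml.new) further down. A config of this shape can be sanity-checked without modifying the node, for example with kubeadm's dry-run mode (not something the test does):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run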
	
	I0417 18:49:52.672877   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:49:52.672919   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:49:52.690262   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:49:52.690373   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
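This static pod runs kube-vip on the control-plane node, announcing the HA virtual IP 192.168.39.254 on eth0 via ARP and load-balancing API traffic on port 8443. Once the API server is up, the VIP can be probed directly (a manual check, not part of the test):

    curl -k https://192.168.39.254:8443/healthz   # returns "ok" once kube-vip holds the lease and the apiserver is healthy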
	I0417 18:49:52.690446   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:49:52.701509   96006 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 18:49:52.701583   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0417 18:49:52.711831   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0417 18:49:52.729846   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:49:52.747405   96006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0417 18:49:52.764754   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0417 18:49:52.782535   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:49:52.786592   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:49:52.799666   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:49:52.916690   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:49:52.934217   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.159
	I0417 18:49:52.934251   96006 certs.go:194] generating shared ca certs ...
	I0417 18:49:52.934274   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:52.934431   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:49:52.934472   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:49:52.934483   96006 certs.go:256] generating profile certs ...
	I0417 18:49:52.934530   96006 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:49:52.934544   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt with IP's: []
	I0417 18:49:53.244202   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt ...
	I0417 18:49:53.244236   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt: {Name:mk260beaef924a663e20e604d910222418991c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.244413   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key ...
	I0417 18:49:53.244425   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key: {Name:mka6e83ab53b9a78e8580ba26a408c6fe0aa4108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.244500   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce
	I0417 18:49:53.244516   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.254]
	I0417 18:49:53.351159   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce ...
	I0417 18:49:53.351195   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce: {Name:mk0d007a3059514906a38e3c48ad705a629ef9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.351349   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce ...
	I0417 18:49:53.351371   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce: {Name:mkb0a296faba2ad3992fca03f9ce3ee187f67de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.351441   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.7257f6ce -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:49:53.351513   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.7257f6ce -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:49:53.351566   96006 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 18:49:53.351580   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt with IP's: []
	I0417 18:49:53.481387   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt ...
	I0417 18:49:53.481422   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt: {Name:mk08465b193680de7d272e691e702536866d5179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.481583   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key ...
	I0417 18:49:53.481594   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key: {Name:mkb793bc73b35b3c9b394526c75bb288dee06af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:49:53.481657   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:49:53.481675   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:49:53.481685   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:49:53.481696   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:49:53.481714   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:49:53.481727   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:49:53.481739   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:49:53.481760   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:49:53.481808   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:49:53.481843   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:49:53.481853   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:49:53.481877   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:49:53.481899   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:49:53.481925   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:49:53.481960   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:49:53.481987   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.482001   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.482013   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.482612   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:49:53.509306   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:49:53.537445   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:49:53.565313   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:49:53.592648   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0417 18:49:53.619996   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 18:49:53.649884   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:49:53.679489   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:49:53.718882   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:49:53.756459   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:49:53.791680   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:49:53.818386   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 18:49:53.836408   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:49:53.842201   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:49:53.854077   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.858810   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.858878   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:49:53.864686   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 18:49:53.876271   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:49:53.888608   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.893582   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.893662   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:49:53.899643   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:49:53.912156   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:49:53.924518   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.929666   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.929743   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:49:53.935925   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
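The odd-looking symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links: each CA under /etc/ssl/certs gets a <hash>.0 alias so TLS clients can look it up by subject hash. For a single certificate the equivalent manual step is:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"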
	I0417 18:49:53.948824   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:49:53.953491   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:49:53.953551   96006 kubeadm.go:391] StartCluster: {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:49:53.953647   96006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 18:49:53.953694   96006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 18:49:53.994421   96006 cri.go:89] found id: ""
	I0417 18:49:53.994501   96006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 18:49:54.006024   96006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 18:49:54.017184   96006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 18:49:54.027791   96006 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 18:49:54.027814   96006 kubeadm.go:156] found existing configuration files:
	
	I0417 18:49:54.027869   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 18:49:54.037957   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 18:49:54.038021   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 18:49:54.048431   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 18:49:54.058368   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 18:49:54.058435   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 18:49:54.069360   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 18:49:54.080326   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 18:49:54.080383   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 18:49:54.091346   96006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 18:49:54.101536   96006 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 18:49:54.101611   96006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 18:49:54.112092   96006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 18:49:54.221173   96006 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 18:49:54.221276   96006 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 18:49:54.345642   96006 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 18:49:54.345788   96006 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 18:49:54.345991   96006 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 18:49:54.586516   96006 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 18:49:54.750772   96006 out.go:204]   - Generating certificates and keys ...
	I0417 18:49:54.750922   96006 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 18:49:54.751062   96006 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 18:49:54.751194   96006 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 18:49:54.903367   96006 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 18:49:54.954630   96006 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 18:49:55.127672   96006 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 18:49:55.391718   96006 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 18:49:55.391885   96006 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-467706 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0417 18:49:55.502336   96006 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 18:49:55.502504   96006 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-467706 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0417 18:49:55.698045   96006 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 18:49:55.867661   96006 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 18:49:56.226068   96006 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 18:49:56.226276   96006 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 18:49:56.336532   96006 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 18:49:56.482956   96006 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 18:49:56.554416   96006 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 18:49:56.815239   96006 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 18:49:56.902120   96006 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 18:49:56.902695   96006 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 18:49:56.906066   96006 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 18:49:56.908270   96006 out.go:204]   - Booting up control plane ...
	I0417 18:49:56.908379   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 18:49:56.908500   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 18:49:56.908588   96006 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 18:49:56.925089   96006 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 18:49:56.925875   96006 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 18:49:56.925950   96006 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 18:49:57.077441   96006 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 18:49:57.077590   96006 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 18:49:57.579007   96006 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.961199ms
	I0417 18:49:57.579109   96006 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 18:50:03.508397   96006 kubeadm.go:309] [api-check] The API server is healthy after 5.933545161s
	I0417 18:50:03.524830   96006 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 18:50:03.541796   96006 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 18:50:03.571323   96006 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 18:50:03.571530   96006 kubeadm.go:309] [mark-control-plane] Marking the node ha-467706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 18:50:03.583854   96006 kubeadm.go:309] [bootstrap-token] Using token: hjpiw8.6t8szkhj41h84dis
	I0417 18:50:03.585429   96006 out.go:204]   - Configuring RBAC rules ...
	I0417 18:50:03.585594   96006 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 18:50:03.600099   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 18:50:03.615453   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 18:50:03.619726   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 18:50:03.624618   96006 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 18:50:03.628356   96006 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 18:50:03.917165   96006 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 18:50:04.368237   96006 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 18:50:04.916872   96006 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 18:50:04.918021   96006 kubeadm.go:309] 
	I0417 18:50:04.918113   96006 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 18:50:04.918127   96006 kubeadm.go:309] 
	I0417 18:50:04.918209   96006 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 18:50:04.918220   96006 kubeadm.go:309] 
	I0417 18:50:04.918272   96006 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 18:50:04.918340   96006 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 18:50:04.918418   96006 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 18:50:04.918429   96006 kubeadm.go:309] 
	I0417 18:50:04.918492   96006 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 18:50:04.918511   96006 kubeadm.go:309] 
	I0417 18:50:04.918573   96006 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 18:50:04.918588   96006 kubeadm.go:309] 
	I0417 18:50:04.918691   96006 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 18:50:04.918802   96006 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 18:50:04.918901   96006 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 18:50:04.918912   96006 kubeadm.go:309] 
	I0417 18:50:04.919032   96006 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 18:50:04.919142   96006 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 18:50:04.919153   96006 kubeadm.go:309] 
	I0417 18:50:04.919262   96006 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hjpiw8.6t8szkhj41h84dis \
	I0417 18:50:04.919431   96006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 \
	I0417 18:50:04.919479   96006 kubeadm.go:309] 	--control-plane 
	I0417 18:50:04.919489   96006 kubeadm.go:309] 
	I0417 18:50:04.919599   96006 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 18:50:04.919614   96006 kubeadm.go:309] 
	I0417 18:50:04.919715   96006 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hjpiw8.6t8szkhj41h84dis \
	I0417 18:50:04.919854   96006 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 
	I0417 18:50:04.920711   96006 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
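The only preflight warning concerns the kubelet unit not being enabled for boot; the remediation kubeadm points at is simply:

    sudo systemctl enable kubelet.service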
	I0417 18:50:04.920755   96006 cni.go:84] Creating CNI manager for ""
	I0417 18:50:04.920765   96006 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0417 18:50:04.922745   96006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0417 18:50:04.924119   96006 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0417 18:50:04.932915   96006 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl ...
	I0417 18:50:04.932940   96006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0417 18:50:04.958687   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
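The CNI manifest (kindnet, per the multinode detection above) is applied with the cluster's own kubectl binary. A quick follow-up check, assuming kindnet's usual app=kindnet pod label, would be:

    sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet   # label assumed from the stock kindnet manifest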
	I0417 18:50:05.317320   96006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 18:50:05.317440   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:05.317469   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706 minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=true
	I0417 18:50:05.374278   96006 ops.go:34] apiserver oom_adj: -16
	I0417 18:50:05.507845   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:06.008879   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:06.508139   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:07.008349   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:07.507966   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:08.008041   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:08.508561   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:09.008903   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:09.508146   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:10.007855   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:10.508919   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:11.008695   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:11.508789   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:12.008689   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:12.508392   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:13.008145   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:13.508071   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:14.008218   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:14.508449   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:15.008880   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:15.508603   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:16.008586   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:16.508154   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:17.008458   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0417 18:50:17.152345   96006 kubeadm.go:1107] duration metric: took 11.83496435s to wait for elevateKubeSystemPrivileges
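The burst of "kubectl get sa default" calls above is a plain poll: the start code retries roughly every half second (per the timestamps) until the default service account has been created, and that wait accounts for most of the 11.8s elevateKubeSystemPrivileges metric. As a shell sketch:

    until sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done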
	W0417 18:50:17.152395   96006 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0417 18:50:17.152405   96006 kubeadm.go:393] duration metric: took 23.198862653s to StartCluster
	I0417 18:50:17.152426   96006 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:17.152501   96006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:50:17.153264   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:17.153473   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0417 18:50:17.153481   96006 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:17.153503   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:50:17.153521   96006 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0417 18:50:17.153623   96006 addons.go:69] Setting storage-provisioner=true in profile "ha-467706"
	I0417 18:50:17.153655   96006 addons.go:234] Setting addon storage-provisioner=true in "ha-467706"
	I0417 18:50:17.153681   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:17.153624   96006 addons.go:69] Setting default-storageclass=true in profile "ha-467706"
	I0417 18:50:17.153746   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:17.153771   96006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-467706"
	I0417 18:50:17.154085   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.154124   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.154162   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.154197   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.169960   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0417 18:50:17.170015   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41635
	I0417 18:50:17.170413   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.170442   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.170930   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.170951   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.171058   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.171079   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.171343   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.171539   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.171702   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.171975   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.172010   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.174031   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:50:17.174414   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0417 18:50:17.175002   96006 cert_rotation.go:137] Starting client certificate rotation controller
	I0417 18:50:17.175289   96006 addons.go:234] Setting addon default-storageclass=true in "ha-467706"
	I0417 18:50:17.175344   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:17.175731   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.175781   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.187657   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0417 18:50:17.188199   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.188829   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.188857   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.189272   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.189618   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.190208   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0417 18:50:17.190573   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.191100   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.191124   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.191523   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.191659   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:17.193826   96006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 18:50:17.192154   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:17.195104   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:17.195281   96006 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:50:17.195305   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 18:50:17.195330   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:17.198235   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.198524   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:17.198546   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.198731   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:17.198949   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:17.199120   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:17.199263   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:17.210463   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0417 18:50:17.210875   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:17.211399   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:17.211424   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:17.211846   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:17.212069   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:17.213887   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:17.214200   96006 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 18:50:17.214223   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 18:50:17.214245   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:17.216977   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.217385   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:17.217430   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:17.217612   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:17.217792   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:17.217935   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:17.218076   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:17.317822   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0417 18:50:17.367713   96006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 18:50:17.380094   96006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 18:50:17.756247   96006 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
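The sed pipeline logged at 18:50:17.317822 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side bridge IP (192.168.39.1), which is what the "host record injected" line above confirms. A minimal Go sketch of just the hosts-block insertion (the extra log directive added before errors is left out; injectHostRecord is an illustrative name, not minikube's own code):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block for host.minikube.internal
    // immediately before the "forward . /etc/resolv.conf" directive.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        // Trimmed Corefile stand-in; the real ConfigMap has more plugins.
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }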
	I0417 18:50:18.075807   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.075832   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.075894   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.075922   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076153   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076172   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076183   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.076191   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076270   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076290   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076295   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076315   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.076329   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.076399   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076410   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.076440   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076507   96006 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0417 18:50:18.076528   96006 round_trippers.go:469] Request Headers:
	I0417 18:50:18.076539   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:50:18.076543   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:50:18.076684   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.076729   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.076748   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.090116   96006 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0417 18:50:18.091273   96006 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0417 18:50:18.091296   96006 round_trippers.go:469] Request Headers:
	I0417 18:50:18.091315   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:50:18.091320   96006 round_trippers.go:473]     Content-Type: application/json
	I0417 18:50:18.091328   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:50:18.094312   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:50:18.094536   96006 main.go:141] libmachine: Making call to close driver server
	I0417 18:50:18.094557   96006 main.go:141] libmachine: (ha-467706) Calling .Close
	I0417 18:50:18.094830   96006 main.go:141] libmachine: Successfully made call to close driver server
	I0417 18:50:18.094851   96006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 18:50:18.094873   96006 main.go:141] libmachine: (ha-467706) DBG | Closing plugin on server side
	I0417 18:50:18.096565   96006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0417 18:50:18.098207   96006 addons.go:505] duration metric: took 944.687777ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0417 18:50:18.098255   96006 start.go:245] waiting for cluster config update ...
	I0417 18:50:18.098273   96006 start.go:254] writing updated cluster config ...
	I0417 18:50:18.099861   96006 out.go:177] 
	I0417 18:50:18.101568   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:18.101676   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:18.103543   96006 out.go:177] * Starting "ha-467706-m02" control-plane node in "ha-467706" cluster
	I0417 18:50:18.104867   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:50:18.104900   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:50:18.105008   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:50:18.105026   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:50:18.105143   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:18.105367   96006 start.go:360] acquireMachinesLock for ha-467706-m02: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:50:18.105445   96006 start.go:364] duration metric: took 47.307µs to acquireMachinesLock for "ha-467706-m02"
	I0417 18:50:18.105473   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:18.105578   96006 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0417 18:50:18.107330   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:50:18.107431   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:18.107460   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:18.122242   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42417
	I0417 18:50:18.122702   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:18.123187   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:18.123210   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:18.123536   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:18.123731   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:18.123878   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:18.124060   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:50:18.124089   96006 client.go:168] LocalClient.Create starting
	I0417 18:50:18.124126   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:50:18.124174   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:50:18.124193   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:50:18.124264   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:50:18.124291   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:50:18.124571   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:50:18.124658   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:50:18.124674   96006 main.go:141] libmachine: (ha-467706-m02) Calling .PreCreateCheck
	I0417 18:50:18.125072   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:18.126362   96006 main.go:141] libmachine: Creating machine...
	I0417 18:50:18.126386   96006 main.go:141] libmachine: (ha-467706-m02) Calling .Create
	I0417 18:50:18.126923   96006 main.go:141] libmachine: (ha-467706-m02) Creating KVM machine...
	I0417 18:50:18.128081   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found existing default KVM network
	I0417 18:50:18.128226   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found existing private KVM network mk-ha-467706
	I0417 18:50:18.128361   96006 main.go:141] libmachine: (ha-467706-m02) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 ...
	I0417 18:50:18.128389   96006 main.go:141] libmachine: (ha-467706-m02) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:50:18.128482   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.128356   96351 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:50:18.128617   96006 main.go:141] libmachine: (ha-467706-m02) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:50:18.367880   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.367714   96351 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa...
	I0417 18:50:18.531251   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.531111   96351 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/ha-467706-m02.rawdisk...
	I0417 18:50:18.531282   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Writing magic tar header
	I0417 18:50:18.531293   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Writing SSH key tar header
	I0417 18:50:18.531301   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:18.531231   96351 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 ...
	I0417 18:50:18.531380   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02
	I0417 18:50:18.531401   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:50:18.531413   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02 (perms=drwx------)
	I0417 18:50:18.531434   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:50:18.531449   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:50:18.531462   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:50:18.531475   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:50:18.531484   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:50:18.531493   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:50:18.531501   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:50:18.531510   96006 main.go:141] libmachine: (ha-467706-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:50:18.531524   96006 main.go:141] libmachine: (ha-467706-m02) Creating domain...
	I0417 18:50:18.531537   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:50:18.531549   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Checking permissions on dir: /home
	I0417 18:50:18.531561   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Skipping /home - not owner
	I0417 18:50:18.532389   96006 main.go:141] libmachine: (ha-467706-m02) define libvirt domain using xml: 
	I0417 18:50:18.532412   96006 main.go:141] libmachine: (ha-467706-m02) <domain type='kvm'>
	I0417 18:50:18.532440   96006 main.go:141] libmachine: (ha-467706-m02)   <name>ha-467706-m02</name>
	I0417 18:50:18.532464   96006 main.go:141] libmachine: (ha-467706-m02)   <memory unit='MiB'>2200</memory>
	I0417 18:50:18.532474   96006 main.go:141] libmachine: (ha-467706-m02)   <vcpu>2</vcpu>
	I0417 18:50:18.532484   96006 main.go:141] libmachine: (ha-467706-m02)   <features>
	I0417 18:50:18.532495   96006 main.go:141] libmachine: (ha-467706-m02)     <acpi/>
	I0417 18:50:18.532508   96006 main.go:141] libmachine: (ha-467706-m02)     <apic/>
	I0417 18:50:18.532574   96006 main.go:141] libmachine: (ha-467706-m02)     <pae/>
	I0417 18:50:18.532610   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.532633   96006 main.go:141] libmachine: (ha-467706-m02)   </features>
	I0417 18:50:18.532649   96006 main.go:141] libmachine: (ha-467706-m02)   <cpu mode='host-passthrough'>
	I0417 18:50:18.532666   96006 main.go:141] libmachine: (ha-467706-m02)   
	I0417 18:50:18.532678   96006 main.go:141] libmachine: (ha-467706-m02)   </cpu>
	I0417 18:50:18.532690   96006 main.go:141] libmachine: (ha-467706-m02)   <os>
	I0417 18:50:18.532702   96006 main.go:141] libmachine: (ha-467706-m02)     <type>hvm</type>
	I0417 18:50:18.532716   96006 main.go:141] libmachine: (ha-467706-m02)     <boot dev='cdrom'/>
	I0417 18:50:18.532738   96006 main.go:141] libmachine: (ha-467706-m02)     <boot dev='hd'/>
	I0417 18:50:18.532751   96006 main.go:141] libmachine: (ha-467706-m02)     <bootmenu enable='no'/>
	I0417 18:50:18.532766   96006 main.go:141] libmachine: (ha-467706-m02)   </os>
	I0417 18:50:18.532791   96006 main.go:141] libmachine: (ha-467706-m02)   <devices>
	I0417 18:50:18.532809   96006 main.go:141] libmachine: (ha-467706-m02)     <disk type='file' device='cdrom'>
	I0417 18:50:18.532826   96006 main.go:141] libmachine: (ha-467706-m02)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/boot2docker.iso'/>
	I0417 18:50:18.532838   96006 main.go:141] libmachine: (ha-467706-m02)       <target dev='hdc' bus='scsi'/>
	I0417 18:50:18.532850   96006 main.go:141] libmachine: (ha-467706-m02)       <readonly/>
	I0417 18:50:18.532869   96006 main.go:141] libmachine: (ha-467706-m02)     </disk>
	I0417 18:50:18.532879   96006 main.go:141] libmachine: (ha-467706-m02)     <disk type='file' device='disk'>
	I0417 18:50:18.532897   96006 main.go:141] libmachine: (ha-467706-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:50:18.532914   96006 main.go:141] libmachine: (ha-467706-m02)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/ha-467706-m02.rawdisk'/>
	I0417 18:50:18.532926   96006 main.go:141] libmachine: (ha-467706-m02)       <target dev='hda' bus='virtio'/>
	I0417 18:50:18.532938   96006 main.go:141] libmachine: (ha-467706-m02)     </disk>
	I0417 18:50:18.532949   96006 main.go:141] libmachine: (ha-467706-m02)     <interface type='network'>
	I0417 18:50:18.532961   96006 main.go:141] libmachine: (ha-467706-m02)       <source network='mk-ha-467706'/>
	I0417 18:50:18.532975   96006 main.go:141] libmachine: (ha-467706-m02)       <model type='virtio'/>
	I0417 18:50:18.532994   96006 main.go:141] libmachine: (ha-467706-m02)     </interface>
	I0417 18:50:18.533013   96006 main.go:141] libmachine: (ha-467706-m02)     <interface type='network'>
	I0417 18:50:18.533026   96006 main.go:141] libmachine: (ha-467706-m02)       <source network='default'/>
	I0417 18:50:18.533043   96006 main.go:141] libmachine: (ha-467706-m02)       <model type='virtio'/>
	I0417 18:50:18.533056   96006 main.go:141] libmachine: (ha-467706-m02)     </interface>
	I0417 18:50:18.533067   96006 main.go:141] libmachine: (ha-467706-m02)     <serial type='pty'>
	I0417 18:50:18.533079   96006 main.go:141] libmachine: (ha-467706-m02)       <target port='0'/>
	I0417 18:50:18.533089   96006 main.go:141] libmachine: (ha-467706-m02)     </serial>
	I0417 18:50:18.533105   96006 main.go:141] libmachine: (ha-467706-m02)     <console type='pty'>
	I0417 18:50:18.533116   96006 main.go:141] libmachine: (ha-467706-m02)       <target type='serial' port='0'/>
	I0417 18:50:18.533138   96006 main.go:141] libmachine: (ha-467706-m02)     </console>
	I0417 18:50:18.533161   96006 main.go:141] libmachine: (ha-467706-m02)     <rng model='virtio'>
	I0417 18:50:18.533179   96006 main.go:141] libmachine: (ha-467706-m02)       <backend model='random'>/dev/random</backend>
	I0417 18:50:18.533191   96006 main.go:141] libmachine: (ha-467706-m02)     </rng>
	I0417 18:50:18.533202   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.533212   96006 main.go:141] libmachine: (ha-467706-m02)     
	I0417 18:50:18.533224   96006 main.go:141] libmachine: (ha-467706-m02)   </devices>
	I0417 18:50:18.533238   96006 main.go:141] libmachine: (ha-467706-m02) </domain>
	I0417 18:50:18.533255   96006 main.go:141] libmachine: (ha-467706-m02) 
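The <domain> definition above is what the kvm2 driver hands to libvirt for the m02 node. A trimmed sketch, assuming a text/template-style renderer (the real definition carries the full <devices> tree shown in the log; this is not the driver's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // domainTmpl is a cut-down stand-in for the driver's domain template.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.Memory}}</memory>
      <vcpu>{{.CPU}}</vcpu>
    </domain>
    `

    type machine struct {
        Name   string
        Memory int
        CPU    int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values match the ha-467706-m02 node created in this run.
        if err := t.Execute(os.Stdout, machine{Name: "ha-467706-m02", Memory: 2200, CPU: 2}); err != nil {
            panic(err)
        }
    }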
	I0417 18:50:18.540094   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:22:42:74 in network default
	I0417 18:50:18.540741   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring networks are active...
	I0417 18:50:18.540791   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:18.541484   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring network default is active
	I0417 18:50:18.541781   96006 main.go:141] libmachine: (ha-467706-m02) Ensuring network mk-ha-467706 is active
	I0417 18:50:18.542114   96006 main.go:141] libmachine: (ha-467706-m02) Getting domain xml...
	I0417 18:50:18.542779   96006 main.go:141] libmachine: (ha-467706-m02) Creating domain...
	I0417 18:50:19.758274   96006 main.go:141] libmachine: (ha-467706-m02) Waiting to get IP...
	I0417 18:50:19.759295   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:19.759776   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:19.759831   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:19.759753   96351 retry.go:31] will retry after 297.823603ms: waiting for machine to come up
	I0417 18:50:20.059605   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.060162   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.060192   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.060099   96351 retry.go:31] will retry after 378.130105ms: waiting for machine to come up
	I0417 18:50:20.439850   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.440396   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.440423   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.440349   96351 retry.go:31] will retry after 309.850338ms: waiting for machine to come up
	I0417 18:50:20.751969   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:20.752504   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:20.752529   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:20.752447   96351 retry.go:31] will retry after 484.021081ms: waiting for machine to come up
	I0417 18:50:21.238166   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:21.238627   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:21.238653   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:21.238571   96351 retry.go:31] will retry after 723.470091ms: waiting for machine to come up
	I0417 18:50:21.963754   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:21.964274   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:21.964316   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:21.964185   96351 retry.go:31] will retry after 820.645081ms: waiting for machine to come up
	I0417 18:50:22.786393   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:22.786856   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:22.786885   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:22.786809   96351 retry.go:31] will retry after 997.774765ms: waiting for machine to come up
	I0417 18:50:23.786284   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:23.786664   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:23.786685   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:23.786639   96351 retry.go:31] will retry after 1.38947065s: waiting for machine to come up
	I0417 18:50:25.177959   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:25.178412   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:25.178445   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:25.178346   96351 retry.go:31] will retry after 1.352777892s: waiting for machine to come up
	I0417 18:50:26.532959   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:26.533453   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:26.533485   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:26.533400   96351 retry.go:31] will retry after 2.218994741s: waiting for machine to come up
	I0417 18:50:28.754002   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:28.754519   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:28.754555   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:28.754456   96351 retry.go:31] will retry after 1.815056829s: waiting for machine to come up
	I0417 18:50:30.572601   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:30.573175   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:30.573208   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:30.573099   96351 retry.go:31] will retry after 2.735191697s: waiting for machine to come up
	I0417 18:50:33.309522   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:33.309997   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:33.310028   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:33.309953   96351 retry.go:31] will retry after 3.13218678s: waiting for machine to come up
	I0417 18:50:36.446318   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:36.446793   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find current IP address of domain ha-467706-m02 in network mk-ha-467706
	I0417 18:50:36.446824   96006 main.go:141] libmachine: (ha-467706-m02) DBG | I0417 18:50:36.446734   96351 retry.go:31] will retry after 5.302006713s: waiting for machine to come up
	I0417 18:50:41.753177   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.753633   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has current primary IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.753665   96006 main.go:141] libmachine: (ha-467706-m02) Found IP for machine: 192.168.39.236
	I0417 18:50:41.753703   96006 main.go:141] libmachine: (ha-467706-m02) Reserving static IP address...
	I0417 18:50:41.754045   96006 main.go:141] libmachine: (ha-467706-m02) DBG | unable to find host DHCP lease matching {name: "ha-467706-m02", mac: "52:54:00:d8:50:50", ip: "192.168.39.236"} in network mk-ha-467706
	I0417 18:50:41.829586   96006 main.go:141] libmachine: (ha-467706-m02) Reserved static IP address: 192.168.39.236
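The "will retry after ..." lines above are the driver polling libvirt's DHCP leases for the new domain's MAC address until one appears, with growing, jittered delays between attempts. A runnable sketch of that loop under those assumptions (the lookup callback is a hypothetical stand-in for the lease query):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func(mac string) (string, bool), mac string, maxAttempts int) (string, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if ip, ok := lookup(mac); ok {
                return ip, nil
            }
            // The intervals in the log grow roughly with the attempt count, plus jitter.
            wait := time.Duration(attempt)*300*time.Millisecond +
                time.Duration(rand.Intn(300))*time.Millisecond
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
        return "", fmt.Errorf("no IP for %s after %d attempts", mac, maxAttempts)
    }

    func main() {
        calls := 0
        fakeLookup := func(mac string) (string, bool) {
            calls++
            return "192.168.39.236", calls >= 3 // pretend the lease shows up on the third poll
        }
        fmt.Println(waitForIP(fakeLookup, "52:54:00:d8:50:50", 10))
    }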
	I0417 18:50:41.829617   96006 main.go:141] libmachine: (ha-467706-m02) Waiting for SSH to be available...
	I0417 18:50:41.829627   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Getting to WaitForSSH function...
	I0417 18:50:41.832895   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.833340   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:41.833363   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.833541   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using SSH client type: external
	I0417 18:50:41.833571   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa (-rw-------)
	I0417 18:50:41.833601   96006 main.go:141] libmachine: (ha-467706-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:50:41.833614   96006 main.go:141] libmachine: (ha-467706-m02) DBG | About to run SSH command:
	I0417 18:50:41.833630   96006 main.go:141] libmachine: (ha-467706-m02) DBG | exit 0
	I0417 18:50:41.960706   96006 main.go:141] libmachine: (ha-467706-m02) DBG | SSH cmd err, output: <nil>: 
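WaitForSSH shells out to the system ssh binary with the option list in the DBG line above and treats a clean exit 0 as proof the machine is reachable. A sketch of that probe (runSSH is an illustrative name, not the driver's API; outside this CI host it will simply report an SSH error):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runSSH mirrors the external-client argument list from the log above.
    func runSSH(ip, keyPath, command string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "docker@" + ip,
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            command,
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
        return err
    }

    func main() {
        _ = runSSH("192.168.39.236",
            "/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa",
            "exit 0")
    }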
	I0417 18:50:41.960994   96006 main.go:141] libmachine: (ha-467706-m02) KVM machine creation complete!
	I0417 18:50:41.961334   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:41.961844   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:41.962061   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:41.962225   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:50:41.962238   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 18:50:41.963436   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:50:41.963451   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:50:41.963456   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:50:41.963463   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:41.965691   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.965995   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:41.966028   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:41.966103   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:41.966297   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:41.966455   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:41.966606   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:41.966798   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:41.966995   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:41.967007   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:50:42.072210   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:50:42.072233   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:50:42.072241   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.075072   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.075435   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.075465   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.075768   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.075992   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.076161   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.076307   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.076475   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.076688   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.076702   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:50:42.185913   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:50:42.185995   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:50:42.186002   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:50:42.186011   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.186288   96006 buildroot.go:166] provisioning hostname "ha-467706-m02"
	I0417 18:50:42.186321   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.186569   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.189252   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.189677   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.189709   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.189846   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.190047   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.190213   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.190368   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.190523   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.190728   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.190744   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706-m02 && echo "ha-467706-m02" | sudo tee /etc/hostname
	I0417 18:50:42.311838   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706-m02
	
	I0417 18:50:42.311871   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.314488   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.314887   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.314914   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.315085   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.315336   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.315547   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.315701   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.315908   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.316085   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.316102   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:50:42.434077   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:50:42.434113   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:50:42.434135   96006 buildroot.go:174] setting up certificates
	I0417 18:50:42.434148   96006 provision.go:84] configureAuth start
	I0417 18:50:42.434166   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetMachineName
	I0417 18:50:42.434487   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:42.437258   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.437702   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.437734   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.437882   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.440159   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.440448   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.440490   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.440578   96006 provision.go:143] copyHostCerts
	I0417 18:50:42.440617   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:50:42.440657   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:50:42.440669   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:50:42.440748   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:50:42.440873   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:50:42.440901   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:50:42.440909   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:50:42.440952   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:50:42.441025   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:50:42.441048   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:50:42.441054   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:50:42.441088   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
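The copyHostCerts block above removes any existing ca.pem/cert.pem/key.pem under the store root and copies the versions from certs/ back in. A minimal sketch of that replace-then-copy step (copyHostCert is a made-up helper; the paths are the ones from this run):

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // copyHostCert replaces dstDir/name with srcDir/name, removing any stale copy first.
    func copyHostCert(srcDir, dstDir, name string) error {
        dst := filepath.Join(dstDir, name)
        if _, err := os.Stat(dst); err == nil {
            log.Printf("found %s, removing ...", dst)
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        data, err := os.ReadFile(filepath.Join(srcDir, name))
        if err != nil {
            return err
        }
        log.Printf("cp: %s --> %s (%d bytes)", filepath.Join(srcDir, name), dst, len(data))
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        base := "/home/jenkins/minikube-integration/18665-75973/.minikube"
        for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
            if err := copyHostCert(filepath.Join(base, "certs"), base, name); err != nil {
                log.Println(err)
            }
        }
    }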
	I0417 18:50:42.441162   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706-m02 san=[127.0.0.1 192.168.39.236 ha-467706-m02 localhost minikube]
	I0417 18:50:42.760848   96006 provision.go:177] copyRemoteCerts
	I0417 18:50:42.760909   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:50:42.760938   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.763523   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.763809   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.763835   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.763992   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.764230   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.764378   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.764519   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:42.847766   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:50:42.847833   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:50:42.873757   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:50:42.873849   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 18:50:42.899584   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:50:42.899649   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:50:42.925517   96006 provision.go:87] duration metric: took 491.346719ms to configureAuth
	I0417 18:50:42.925550   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:50:42.925744   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:42.925844   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:42.928428   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.928795   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:42.928848   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:42.929039   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:42.929254   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.929439   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:42.929593   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:42.929783   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:42.929940   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:42.929955   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:50:43.215765   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 18:50:43.215798   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:50:43.215807   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetURL
	I0417 18:50:43.217123   96006 main.go:141] libmachine: (ha-467706-m02) DBG | Using libvirt version 6000000
	I0417 18:50:43.219595   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.220034   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.220066   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.220248   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:50:43.220262   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:50:43.220270   96006 client.go:171] duration metric: took 25.096172136s to LocalClient.Create
	I0417 18:50:43.220292   96006 start.go:167] duration metric: took 25.096248433s to libmachine.API.Create "ha-467706"
	I0417 18:50:43.220302   96006 start.go:293] postStartSetup for "ha-467706-m02" (driver="kvm2")
	I0417 18:50:43.220314   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:50:43.220347   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.220618   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:50:43.220650   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.222911   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.223248   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.223279   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.223454   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.223636   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.223808   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.223996   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.309604   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:50:43.314300   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:50:43.314329   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:50:43.314402   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:50:43.314489   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:50:43.314503   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:50:43.314613   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:50:43.324876   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:50:43.352537   96006 start.go:296] duration metric: took 132.222885ms for postStartSetup
	I0417 18:50:43.352594   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetConfigRaw
	I0417 18:50:43.353271   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:43.355939   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.356297   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.356327   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.356586   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:50:43.356837   96006 start.go:128] duration metric: took 25.251245741s to createHost
	I0417 18:50:43.356869   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.358910   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.359264   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.359292   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.359361   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.359556   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.359731   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.359873   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.360045   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:50:43.360216   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I0417 18:50:43.360227   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:50:43.469667   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379843.419472324
	
	I0417 18:50:43.469695   96006 fix.go:216] guest clock: 1713379843.419472324
	I0417 18:50:43.469704   96006 fix.go:229] Guest: 2024-04-17 18:50:43.419472324 +0000 UTC Remote: 2024-04-17 18:50:43.356854721 +0000 UTC m=+80.784201764 (delta=62.617603ms)
	I0417 18:50:43.469725   96006 fix.go:200] guest clock delta is within tolerance: 62.617603ms
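
fix.go is comparing the guest's "date +%s.%N" output against the host clock and only forcing a resync when the skew exceeds a tolerance; here the 62.6ms delta passes. A rough stand-in for that check, assuming a one-second tolerance (the real threshold lives in minikube's fix.go and is not shown in this log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the "seconds.nanoseconds" string printed by the
// guest's date command and returns how far it is from the host time.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration only
	// Sample values taken from the guest/remote timestamps in the log above.
	delta, err := guestClockDelta("1713379843.419472324", time.Unix(1713379843, 356854721))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}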
	I0417 18:50:43.469732   96006 start.go:83] releasing machines lock for "ha-467706-m02", held for 25.364268885s
	I0417 18:50:43.469750   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.470040   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:43.472586   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.473021   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.473047   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.475263   96006 out.go:177] * Found network options:
	I0417 18:50:43.476714   96006 out.go:177]   - NO_PROXY=192.168.39.159
	W0417 18:50:43.477965   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:50:43.478009   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478640   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478829   96006 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 18:50:43.478917   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:50:43.478957   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	W0417 18:50:43.479048   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:50:43.479126   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:50:43.479148   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 18:50:43.481734   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.481965   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482051   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.482106   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482197   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.482332   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:43.482357   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:43.482366   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.482509   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 18:50:43.482559   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.482676   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 18:50:43.482729   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.482839   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 18:50:43.482978   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 18:50:43.723186   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:50:43.729400   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:50:43.729487   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:50:43.747191   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:50:43.747222   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:50:43.747298   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:50:43.766134   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:50:43.782049   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:50:43.782103   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:50:43.796842   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:50:43.811837   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:50:43.955183   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:50:44.119405   96006 docker.go:233] disabling docker service ...
	I0417 18:50:44.119488   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:50:44.135751   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:50:44.150314   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:50:44.289369   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:50:44.419335   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:50:44.434448   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:50:44.454265   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:50:44.454341   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.465548   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:50:44.465626   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.477204   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.488510   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.500218   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:50:44.511859   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.523479   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:50:44.544983   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
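
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, move conmon into the pod cgroup, and seed default_sysctls so unprivileged ports start at 0. The same edits done locally in Go, as a sketch (file path and values taken from the log; the default_sysctls step is omitted):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Drop any existing conmon_cgroup line, then re-add it right after
	// cgroup_manager -- the same net effect as the sed delete/append pair above.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}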
	I0417 18:50:44.556679   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:50:44.567217   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:50:44.567280   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:50:44.581851   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 18:50:44.592445   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:50:44.730545   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 18:50:44.871219   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:50:44.871302   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:50:44.876072   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:50:44.876152   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:50:44.880114   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:50:44.920173   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:50:44.920278   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:50:44.949069   96006 ssh_runner.go:195] Run: crio --version
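
start.go gives CRI-O up to 60s for /var/run/crio/crio.sock to appear and another 60s for crictl to report a version before continuing. A dependency-free stand-in for that wait, dialing the socket instead of stat'ing it (path and timeout copied from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the CRI-O unix socket until it answers or the
// deadline passes, roughly what the 60s socket wait above does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready after %v: %v", path, timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}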
	I0417 18:50:44.987468   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:50:44.988990   96006 out.go:177]   - env NO_PROXY=192.168.39.159
	I0417 18:50:44.990345   96006 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 18:50:44.992870   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:44.993230   96006 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:50:33 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 18:50:44.993253   96006 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 18:50:44.993438   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:50:44.998037   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:50:45.011925   96006 mustload.go:65] Loading cluster: ha-467706
	I0417 18:50:45.012122   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:50:45.012364   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:45.012402   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:45.027180   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0417 18:50:45.027710   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:45.028266   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:45.028293   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:45.028637   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:45.029263   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:50:45.031035   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:45.031317   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:45.031355   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:45.046157   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35907
	I0417 18:50:45.046558   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:45.047075   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:45.047105   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:45.047444   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:45.047639   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:45.047800   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.236
	I0417 18:50:45.047815   96006 certs.go:194] generating shared ca certs ...
	I0417 18:50:45.047834   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.047966   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:50:45.048005   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:50:45.048014   96006 certs.go:256] generating profile certs ...
	I0417 18:50:45.048106   96006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:50:45.048131   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3
	I0417 18:50:45.048146   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.254]
	I0417 18:50:45.216050   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 ...
	I0417 18:50:45.216082   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3: {Name:mke40eb0bbfa4a257d69dee7c0db8615a28a2c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.216247   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3 ...
	I0417 18:50:45.216267   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3: {Name:mk744f487b85aae0492308fe90f1def1e1057446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:50:45.216334   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.4be75ba3 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:50:45.216463   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.4be75ba3 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:50:45.216592   96006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 18:50:45.216609   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:50:45.216622   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:50:45.216636   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:50:45.216647   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:50:45.216658   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:50:45.216668   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:50:45.216682   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:50:45.216694   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:50:45.216742   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:50:45.216789   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:50:45.216800   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:50:45.216830   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:50:45.216853   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:50:45.216875   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:50:45.216912   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:50:45.216937   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.216951   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.216966   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
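
The profile cert minted above (apiserver.crt.4be75ba3) is signed by minikubeCA and carries the service IP, localhost, both node IPs and the 192.168.39.254 VIP as SANs, so the same serving certificate stays valid whichever control-plane node answers on the VIP. A self-contained crypto/x509 sketch of issuing a cert with that SAN list, using a throwaway CA generated on the spot (minikube reuses its existing ca.key instead; error checks are dropped to keep the sketch short):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert for the apiserver with the SAN list from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.159"), net.ParseIP("192.168.39.236"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("issued apiserver cert, %d DER bytes\n", len(srvDER))
}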
	I0417 18:50:45.217028   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:45.220190   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:45.220566   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:45.220597   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:45.220791   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:45.221049   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:45.221236   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:45.221414   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:45.293203   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0417 18:50:45.298477   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0417 18:50:45.313225   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0417 18:50:45.318150   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0417 18:50:45.332489   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0417 18:50:45.340783   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0417 18:50:45.351476   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0417 18:50:45.355935   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0417 18:50:45.368043   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0417 18:50:45.372901   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0417 18:50:45.385103   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0417 18:50:45.390349   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0417 18:50:45.403053   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:50:45.428841   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:50:45.453137   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:50:45.478112   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:50:45.503276   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0417 18:50:45.528841   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 18:50:45.554403   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:50:45.581253   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:50:45.609231   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:50:45.635402   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:50:45.661160   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:50:45.687529   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0417 18:50:45.704763   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0417 18:50:45.721300   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0417 18:50:45.738377   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0417 18:50:45.755487   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0417 18:50:45.772301   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0417 18:50:45.789375   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0417 18:50:45.807294   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:50:45.813499   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:50:45.825040   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.830049   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.830110   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:50:45.836295   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:50:45.848368   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:50:45.859950   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.865632   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.865700   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:50:45.871771   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 18:50:45.883399   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:50:45.894718   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.900008   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.900078   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:50:45.906155   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
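
The openssl x509 -hash runs above exist because OpenSSL-based clients look CA certificates up in /etc/ssl/certs by subject hash, e.g. b5213941.0 -> minikubeCA.pem. A sketch of recreating one of those hash symlinks, shelling out to openssl the same way the log does (paths copied from the log; assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the openssl + ln -fs pair in the log.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}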
	I0417 18:50:45.917743   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:50:45.922549   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:50:45.922604   96006 kubeadm.go:928] updating node {m02 192.168.39.236 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:50:45.922716   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
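
That kubelet drop-in is rendered from the node's config; the --hostname-override and --node-ip flags are what make m02 register under its own name and 192.168.39.236 rather than inheriting the primary's identity. A small text/template sketch that produces the same ExecStart line (the template text is paraphrased from the log, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values for m02, taken from the log lines above.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.0-rc.2", "ha-467706-m02", "192.168.39.236"})
}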
	I0417 18:50:45.922752   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:50:45.922800   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:50:45.943173   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:50:45.943265   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
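
That manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (the 1346-byte scp a few lines below), so the kubelet runs kube-vip as a static pod; with cp_enable and lb_enable set it advertises 192.168.39.254 over ARP and load-balances API traffic across control-plane nodes on 8443. Dropping a static pod manifest is just a file write into the kubelet's manifest directory, sketched here with a shortened stand-in for the YAML above:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	// A real manifest would hold the full kube-vip pod spec shown above.
	const kubeVIPManifest = "apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n"
	dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
	// The kubelet watches this directory and starts or updates the static pod on its own.
	if err := os.WriteFile(dst, []byte(kubeVIPManifest), 0o644); err != nil {
		panic(err)
	}
}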
	I0417 18:50:45.943322   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:50:45.956026   96006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0-rc.2': No such file or directory
	
	Initiating transfer...
	I0417 18:50:45.956087   96006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:50:45.967843   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0417 18:50:45.967881   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:50:45.967960   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:50:45.967964   96006 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet
	I0417 18:50:45.967984   96006 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm
	I0417 18:50:45.972622   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl': No such file or directory
	I0417 18:50:45.972648   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl (51454104 bytes)
	I0417 18:50:48.202574   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:50:48.202664   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:50:48.208132   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm': No such file or directory
	I0417 18:50:48.208171   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm (50249880 bytes)
	I0417 18:50:50.185607   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:50:50.201809   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:50:50.201893   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:50:50.207126   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet': No such file or directory
	I0417 18:50:50.207170   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet (100100024 bytes)
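
The three downloads above pull kubectl, kubeadm and kubelet from dl.k8s.io and verify each against its published .sha256 before copying it into /var/lib/minikube/binaries on the node. A standard-library sketch of that download-and-verify step (the URL follows the pattern in the log; the destination path is only illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func httpGetBytes(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// fetchVerified downloads url to dst and checks it against the SHA-256
// published at url+".sha256", mirroring the checksum=file: downloads above.
func fetchVerified(url, dst string) error {
	sumFile, err := httpGetBytes(url + ".sha256")
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumFile))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	body, err := httpGetBytes(url)
	if err != nil {
		return err
	}
	sum := sha256.Sum256(body)
	if got := hex.EncodeToString(sum[:]); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return os.WriteFile(dst, body, 0o755)
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "/tmp/kubectl"); err != nil { // destination is illustrative
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified")
}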
	I0417 18:50:50.649074   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0417 18:50:50.659340   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0417 18:50:50.678040   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:50:50.697112   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 18:50:50.715650   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:50:50.720043   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:50:50.733632   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:50:50.869987   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:50:50.888165   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:50:50.888697   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:50:50.888785   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:50:50.904309   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0417 18:50:50.904752   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:50:50.905310   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:50:50.905332   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:50:50.905622   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:50:50.905847   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:50:50.906027   96006 start.go:316] joinCluster: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:50:50.906152   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0417 18:50:50.906180   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:50:50.909437   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:50.909900   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:50:50.909928   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:50:50.910110   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:50:50.910342   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:50:50.910526   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:50:50.910722   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:50:51.055851   96006 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:50:51.055919   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt3ppm.k6jpgcx1jeyj2jt1 --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m02 --control-plane --apiserver-advertise-address=192.168.39.236 --apiserver-bind-port=8443"
	I0417 18:51:13.972353   96006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt3ppm.k6jpgcx1jeyj2jt1 --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m02 --control-plane --apiserver-advertise-address=192.168.39.236 --apiserver-bind-port=8443": (22.916397784s)
	I0417 18:51:13.972395   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0417 18:51:14.574803   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706-m02 minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=false
	I0417 18:51:14.717411   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-467706-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0417 18:51:14.893231   96006 start.go:318] duration metric: took 23.987209952s to joinCluster
	I0417 18:51:14.893330   96006 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:51:14.895174   96006 out.go:177] * Verifying Kubernetes components...
	I0417 18:51:14.893626   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:14.896844   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:51:15.183083   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:51:15.224796   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:51:15.225073   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0417 18:51:15.225173   96006 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.159:8443
	I0417 18:51:15.225692   96006 node_ready.go:35] waiting up to 6m0s for node "ha-467706-m02" to be "Ready" ...
	I0417 18:51:15.225869   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:15.225880   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:15.225892   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:15.225896   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:15.236705   96006 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0417 18:51:15.726883   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:15.726910   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:15.726922   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:15.726928   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:15.735017   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:16.226972   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:16.227006   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:16.227019   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:16.227024   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:16.231152   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:16.726335   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:16.726359   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:16.726368   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:16.726371   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:16.730277   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:17.226638   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:17.226669   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:17.226678   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:17.226681   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:17.230662   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:17.231373   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:17.726905   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:17.726929   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:17.726938   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:17.726941   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:17.731821   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:18.226785   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:18.226823   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:18.226835   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:18.226841   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:18.231456   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:18.726414   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:18.726439   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:18.726448   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:18.726451   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:18.742533   96006 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0417 18:51:19.225974   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:19.225999   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:19.226009   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:19.226014   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:19.231635   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:19.233042   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:19.726389   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:19.726411   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:19.726420   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:19.726425   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:19.729897   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:20.225875   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:20.225899   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:20.225907   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:20.225911   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:20.229100   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:20.726901   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:20.726924   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:20.726932   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:20.726936   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:20.730417   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:21.226005   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:21.226091   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:21.226109   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:21.226117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:21.230666   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:21.726785   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:21.726806   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:21.726813   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:21.726818   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:21.731509   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:21.732328   96006 node_ready.go:53] node "ha-467706-m02" has status "Ready":"False"
	I0417 18:51:22.226721   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:22.226745   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:22.226756   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:22.226761   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:22.231754   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:22.726964   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:22.726997   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:22.727010   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:22.727018   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:22.735454   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:23.226383   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.226413   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.226428   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.226437   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.229878   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.230663   96006 node_ready.go:49] node "ha-467706-m02" has status "Ready":"True"
	I0417 18:51:23.230683   96006 node_ready.go:38] duration metric: took 8.0049708s for node "ha-467706-m02" to be "Ready" ...
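
The GET loop above is node_ready.go polling /api/v1/nodes/ha-467706-m02 every 500ms until the node's Ready condition reports True, which takes about 8s here. A dependency-free sketch of the same poll that just decodes the node JSON; the HTTP client is assumed to already carry the kubeconfig's client-certificate credentials, which the sketch does not set up:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus matches just the fields we need from /api/v1/nodes/<name>.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the node object until its Ready condition is "True",
// like the 500ms GET loop in the log.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			var n nodeStatus
			_ = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %v", node, timeout)
}

func main() {
	// Placeholder client; a real one needs the TLS client cert from the kubeconfig.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.159:8443", "ha-467706-m02", 6*time.Minute)
	fmt.Println(err)
}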
	I0417 18:51:23.230694   96006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:51:23.230814   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:23.230827   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.230835   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.230838   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.239654   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:23.246313   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.246432   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-56dz8
	I0417 18:51:23.246443   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.246451   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.246454   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.249987   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.251471   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.251491   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.251498   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.251503   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.254142   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.254681   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.254699   96006 pod_ready.go:81] duration metric: took 8.360266ms for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.254707   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.254764   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kcdqn
	I0417 18:51:23.254773   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.254780   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.254784   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.257241   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.257923   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.257937   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.257944   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.257947   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.260197   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.260616   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.260631   96006 pod_ready.go:81] duration metric: took 5.918629ms for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.260640   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.260696   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706
	I0417 18:51:23.260705   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.260712   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.260723   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.263098   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.263593   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:23.263606   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.263612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.263616   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.266189   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.267242   96006 pod_ready.go:92] pod "etcd-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:23.267256   96006 pod_ready.go:81] duration metric: took 6.610637ms for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.267265   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:23.267312   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:23.267322   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.267328   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.267331   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.269674   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.270282   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.270294   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.270301   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.270304   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.272657   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:23.767707   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:23.767738   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.767746   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.767751   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.770927   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:23.771848   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:23.771867   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:23.771877   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:23.771882   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:23.774590   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:24.267451   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:24.267480   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.267493   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.267498   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.271213   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.272086   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:24.272102   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.272109   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.272114   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.275365   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.768469   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:24.768502   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.768514   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.768519   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.772128   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:24.773012   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:24.773028   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:24.773037   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:24.773041   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:24.776084   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.268038   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:25.268060   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.268068   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.268073   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.271535   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.272213   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:25.272231   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.272239   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.272244   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.275254   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:25.275716   96006 pod_ready.go:102] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:25.768203   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:25.768229   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.768239   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.768245   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.771806   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:25.772566   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:25.772611   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:25.772619   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:25.772625   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:25.775636   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:26.268063   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:26.268099   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.268124   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.268129   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.272010   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:26.272908   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:26.272926   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.272937   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.272942   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.275754   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:26.767796   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:26.767819   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.767827   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.767831   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.771440   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:26.772137   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:26.772157   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:26.772168   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:26.772174   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:26.775107   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.267560   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:51:27.267584   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.267592   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.267597   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.271553   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:27.272200   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.272217   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.272226   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.272229   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.274967   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.275841   96006 pod_ready.go:92] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:27.275858   96006 pod_ready.go:81] duration metric: took 4.008587704s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.275872   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.275926   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706
	I0417 18:51:27.275934   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.275941   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.275945   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.278731   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.279548   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:27.279562   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.279569   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.279572   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.282012   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.282523   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:27.282541   96006 pod_ready.go:81] duration metric: took 6.66288ms for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.282549   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:27.282597   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:27.282605   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.282612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.282616   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.285219   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.285913   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.285925   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.285932   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.285936   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.288353   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:27.782927   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:27.782951   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.782962   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.782966   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.787431   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:27.788870   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:27.788887   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:27.788895   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:27.788900   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:27.792215   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.283697   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:28.283728   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.283737   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.283741   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.287291   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.288522   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:28.288535   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.288545   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.288548   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.291872   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:28.782825   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:28.782848   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.782857   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.782862   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.787118   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:28.787874   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:28.787889   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:28.787897   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:28.787902   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:28.791077   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:29.283249   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:29.283280   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.283292   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.283297   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.288519   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:29.289409   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:29.289428   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.289440   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.289446   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.292965   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:29.293591   96006 pod_ready.go:102] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:29.782904   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:29.782929   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.782940   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.782944   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.787500   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:29.789301   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:29.789322   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:29.789333   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:29.789339   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:29.793831   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:30.283438   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:30.283462   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.283470   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.283474   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.286931   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:30.287797   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:30.287812   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.287820   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.287825   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.290475   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:30.783555   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:30.783578   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.783586   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.783591   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.787803   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:30.788747   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:30.788767   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:30.788804   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:30.788811   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:30.792158   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.283747   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:31.283777   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.283790   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.283795   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.288177   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:31.289081   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:31.289095   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.289103   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.289107   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.292276   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.783370   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:31.783402   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.783414   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.783420   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.786680   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:31.787486   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:31.787503   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:31.787510   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:31.787514   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:31.790453   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:31.790996   96006 pod_ready.go:102] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"False"
	I0417 18:51:32.283343   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:51:32.283366   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.283375   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.283380   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.287390   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.289101   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.289117   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.289129   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.289137   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.297645   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:51:32.298808   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.298825   96006 pod_ready.go:81] duration metric: took 5.016269549s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.298836   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.298896   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:51:32.298903   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.298911   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.298918   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.303341   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:32.304646   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.304662   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.304670   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.304675   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.307481   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.307982   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.308001   96006 pod_ready.go:81] duration metric: took 9.157988ms for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.308011   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.308072   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:51:32.308080   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.308087   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.308090   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.310622   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.311286   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.311300   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.311306   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.311309   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.314424   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.315460   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.315477   96006 pod_ready.go:81] duration metric: took 7.460114ms for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.315486   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.315539   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:51:32.315547   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.315554   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.315562   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.318568   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:51:32.426747   96006 request.go:629] Waited for 107.278257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.426836   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:32.426844   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.426855   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.426865   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.430214   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.430962   96006 pod_ready.go:92] pod "kube-proxy-hd469" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.430982   96006 pod_ready.go:81] duration metric: took 115.490294ms for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
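The "Waited for ... due to client-side throttling, not priority and fairness" messages interleaved here come from client-go's local token-bucket rate limiter, not from the API server. A hedged sketch of where that limit lives (QPS/Burst on the rest.Config returned by BuildConfigFromFlags; the values below are illustrative, not what minikube actually sets):

	// Sketch: the client-side limiter that produces the throttling messages above
	// is configured on rest.Config. Raising QPS/Burst reduces those waits at the
	// cost of more request pressure on the API server.
	package clientutil

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func NewClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // illustrative; client-go's default QPS is low (around 5)
		cfg.Burst = 100 // illustrative; default burst is around 10
		return kubernetes.NewForConfig(cfg)
	}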
	I0417 18:51:32.430993   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.626388   96006 request.go:629] Waited for 195.326382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:51:32.626452   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:51:32.626458   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.626466   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.626478   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.630307   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:32.827284   96006 request.go:629] Waited for 196.366908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.827344   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:32.827349   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:32.827357   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:32.827364   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:32.831913   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:32.832435   96006 pod_ready.go:92] pod "kube-proxy-qxtf4" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:32.832455   96006 pod_ready.go:81] duration metric: took 401.454584ms for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:32.832469   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.026738   96006 request.go:629] Waited for 194.18784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:51:33.026803   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:51:33.026810   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.026821   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.026837   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.030357   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.226664   96006 request.go:629] Waited for 195.467878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:33.226745   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:51:33.226754   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.226763   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.226766   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.230848   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:51:33.231696   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:33.231721   96006 pod_ready.go:81] duration metric: took 399.24464ms for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.231736   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.426832   96006 request.go:629] Waited for 194.99926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:51:33.426911   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:51:33.426918   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.426927   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.426938   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.430894   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.626958   96006 request.go:629] Waited for 195.422824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:33.627025   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:51:33.627030   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.627038   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.627041   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.631026   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:33.631774   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:51:33.631795   96006 pod_ready.go:81] duration metric: took 400.050921ms for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:51:33.631806   96006 pod_ready.go:38] duration metric: took 10.401080338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:51:33.631822   96006 api_server.go:52] waiting for apiserver process to appear ...
	I0417 18:51:33.631879   96006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:51:33.648554   96006 api_server.go:72] duration metric: took 18.75517691s to wait for apiserver process to appear ...
	I0417 18:51:33.648600   96006 api_server.go:88] waiting for apiserver healthz status ...
	I0417 18:51:33.648626   96006 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0417 18:51:33.653277   96006 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0417 18:51:33.653354   96006 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I0417 18:51:33.653362   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.653370   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.653374   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.654340   96006 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0417 18:51:33.654453   96006 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 18:51:33.654471   96006 api_server.go:131] duration metric: took 5.864211ms to wait for apiserver health ...
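The healthz and version probes above are plain HTTPS GETs against the API server endpoint. A minimal self-contained sketch of the same probe (the real check authenticates with the cluster's client certificates; InsecureSkipVerify is used here only to keep the example short and is an assumption, not minikube's behaviour):

	// Sketch: GET https://<apiserver>:8443/healthz and treat a 200 "ok" body as healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.159:8443/healthz")
		if err != nil {
			fmt.Println("healthz error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
	}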
	I0417 18:51:33.654480   96006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 18:51:33.826909   96006 request.go:629] Waited for 172.335088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:33.826975   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:33.826981   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:33.826989   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:33.826993   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:33.832448   96006 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0417 18:51:33.837613   96006 system_pods.go:59] 17 kube-system pods found
	I0417 18:51:33.837640   96006 system_pods.go:61] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:51:33.837646   96006 system_pods.go:61] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:51:33.837650   96006 system_pods.go:61] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:51:33.837654   96006 system_pods.go:61] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:51:33.837659   96006 system_pods.go:61] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:51:33.837663   96006 system_pods.go:61] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:51:33.837668   96006 system_pods.go:61] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:51:33.837675   96006 system_pods.go:61] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:51:33.837681   96006 system_pods.go:61] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:51:33.837690   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:51:33.837698   96006 system_pods.go:61] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:51:33.837703   96006 system_pods.go:61] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:51:33.837709   96006 system_pods.go:61] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:51:33.837713   96006 system_pods.go:61] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:51:33.837718   96006 system_pods.go:61] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:51:33.837721   96006 system_pods.go:61] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:51:33.837726   96006 system_pods.go:61] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:51:33.837732   96006 system_pods.go:74] duration metric: took 183.242044ms to wait for pod list to return data ...
	I0417 18:51:33.837743   96006 default_sa.go:34] waiting for default service account to be created ...
	I0417 18:51:34.027238   96006 request.go:629] Waited for 189.408907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:51:34.027344   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:51:34.027356   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.027369   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.027387   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.031074   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:34.031378   96006 default_sa.go:45] found service account: "default"
	I0417 18:51:34.031406   96006 default_sa.go:55] duration metric: took 193.654279ms for default service account to be created ...
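The default_sa wait boils down to checking that a ServiceAccount named "default" exists in the default namespace. A roughly equivalent client-go call, for illustration only (kubeconfig path is a placeholder):

	// Sketch: confirm the "default" ServiceAccount exists, which is what the
	// default_sa wait above is looking for.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			fmt.Println("default service account not found yet:", err)
			return
		}
		fmt.Println("found service account:", sa.Name)
	}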
	I0417 18:51:34.031422   96006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 18:51:34.226919   96006 request.go:629] Waited for 195.41558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:34.227010   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:51:34.227019   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.227035   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.227045   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.233257   96006 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0417 18:51:34.238042   96006 system_pods.go:86] 17 kube-system pods found
	I0417 18:51:34.238071   96006 system_pods.go:89] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:51:34.238076   96006 system_pods.go:89] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:51:34.238081   96006 system_pods.go:89] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:51:34.238087   96006 system_pods.go:89] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:51:34.238093   96006 system_pods.go:89] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:51:34.238099   96006 system_pods.go:89] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:51:34.238104   96006 system_pods.go:89] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:51:34.238111   96006 system_pods.go:89] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:51:34.238121   96006 system_pods.go:89] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:51:34.238129   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:51:34.238135   96006 system_pods.go:89] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:51:34.238144   96006 system_pods.go:89] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:51:34.238152   96006 system_pods.go:89] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:51:34.238162   96006 system_pods.go:89] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:51:34.238171   96006 system_pods.go:89] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:51:34.238180   96006 system_pods.go:89] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:51:34.238189   96006 system_pods.go:89] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:51:34.238202   96006 system_pods.go:126] duration metric: took 206.768944ms to wait for k8s-apps to be running ...
	I0417 18:51:34.238215   96006 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 18:51:34.238275   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:51:34.255222   96006 system_svc.go:56] duration metric: took 16.993241ms WaitForService to wait for kubelet
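The kubelet check above relies on the exit status of `systemctl is-active --quiet`, run over SSH inside the VM. A local sketch of the same idea (illustrative only; minikube goes through its ssh_runner rather than os/exec):

	// Sketch: `systemctl is-active --quiet <unit>` exits 0 when the unit is
	// active and non-zero otherwise, so the error from Run() carries the answer.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func unitActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", unitActive("kubelet"))
	}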
	I0417 18:51:34.255257   96006 kubeadm.go:576] duration metric: took 19.361885993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:51:34.255285   96006 node_conditions.go:102] verifying NodePressure condition ...
	I0417 18:51:34.426766   96006 request.go:629] Waited for 171.388544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I0417 18:51:34.426844   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I0417 18:51:34.426849   96006 round_trippers.go:469] Request Headers:
	I0417 18:51:34.426857   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:51:34.426862   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:51:34.430450   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:51:34.431543   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:51:34.431577   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:51:34.431593   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:51:34.431598   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:51:34.431605   96006 node_conditions.go:105] duration metric: took 176.314123ms to run NodePressure ...
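The NodePressure step reads each node's capacity and conditions, which is where the "ephemeral capacity" and "cpu capacity" lines above come from. A client-go sketch of that read, for illustration only (kubeconfig path is a placeholder):

	// Sketch: list the nodes, print ephemeral-storage and cpu capacity, and flag
	// any pressure conditions (MemoryPressure/DiskPressure/PIDPressure) that are True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure condition %s is True\n", c.Type)
					}
				}
			}
		}
	}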
	I0417 18:51:34.431620   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:51:34.431657   96006 start.go:254] writing updated cluster config ...
	I0417 18:51:34.433887   96006 out.go:177] 
	I0417 18:51:34.435625   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:34.435731   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:34.437530   96006 out.go:177] * Starting "ha-467706-m03" control-plane node in "ha-467706" cluster
	I0417 18:51:34.438965   96006 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:51:34.438995   96006 cache.go:56] Caching tarball of preloaded images
	I0417 18:51:34.439105   96006 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:51:34.439117   96006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:51:34.439238   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:34.439424   96006 start.go:360] acquireMachinesLock for ha-467706-m03: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:51:34.439471   96006 start.go:364] duration metric: took 26.368µs to acquireMachinesLock for "ha-467706-m03"
	I0417 18:51:34.439490   96006 start.go:93] Provisioning new machine with config: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:51:34.439582   96006 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0417 18:51:34.441053   96006 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 18:51:34.441163   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:51:34.441212   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:51:34.456551   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I0417 18:51:34.457105   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:51:34.457624   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:51:34.457646   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:51:34.457978   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:51:34.458265   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:34.458431   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:34.458625   96006 start.go:159] libmachine.API.Create for "ha-467706" (driver="kvm2")
	I0417 18:51:34.458663   96006 client.go:168] LocalClient.Create starting
	I0417 18:51:34.458711   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 18:51:34.458750   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:51:34.458773   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:51:34.458838   96006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 18:51:34.458866   96006 main.go:141] libmachine: Decoding PEM data...
	I0417 18:51:34.458883   96006 main.go:141] libmachine: Parsing certificate...
	I0417 18:51:34.458907   96006 main.go:141] libmachine: Running pre-create checks...
	I0417 18:51:34.458920   96006 main.go:141] libmachine: (ha-467706-m03) Calling .PreCreateCheck
	I0417 18:51:34.459099   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:34.459607   96006 main.go:141] libmachine: Creating machine...
	I0417 18:51:34.459627   96006 main.go:141] libmachine: (ha-467706-m03) Calling .Create
	I0417 18:51:34.459808   96006 main.go:141] libmachine: (ha-467706-m03) Creating KVM machine...
	I0417 18:51:34.461114   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found existing default KVM network
	I0417 18:51:34.461289   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found existing private KVM network mk-ha-467706
	I0417 18:51:34.461430   96006 main.go:141] libmachine: (ha-467706-m03) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 ...
	I0417 18:51:34.461457   96006 main.go:141] libmachine: (ha-467706-m03) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 18:51:34.461539   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.461418   96690 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:51:34.461604   96006 main.go:141] libmachine: (ha-467706-m03) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 18:51:34.702350   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.702215   96690 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa...
	I0417 18:51:34.869742   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.869577   96690 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/ha-467706-m03.rawdisk...
	I0417 18:51:34.869789   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Writing magic tar header
	I0417 18:51:34.869844   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Writing SSH key tar header
	I0417 18:51:34.869872   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 (perms=drwx------)
	I0417 18:51:34.869891   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:34.869703   96690 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03 ...
	I0417 18:51:34.869911   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03
	I0417 18:51:34.869926   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 18:51:34.869933   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 18:51:34.869949   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 18:51:34.869960   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 18:51:34.869974   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:51:34.869987   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 18:51:34.869996   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 18:51:34.870021   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home/jenkins
	I0417 18:51:34.870047   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Checking permissions on dir: /home
	I0417 18:51:34.870063   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 18:51:34.870079   96006 main.go:141] libmachine: (ha-467706-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 18:51:34.870089   96006 main.go:141] libmachine: (ha-467706-m03) Creating domain...
	I0417 18:51:34.870104   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Skipping /home - not owner
	I0417 18:51:34.871109   96006 main.go:141] libmachine: (ha-467706-m03) define libvirt domain using xml: 
	I0417 18:51:34.871128   96006 main.go:141] libmachine: (ha-467706-m03) <domain type='kvm'>
	I0417 18:51:34.871138   96006 main.go:141] libmachine: (ha-467706-m03)   <name>ha-467706-m03</name>
	I0417 18:51:34.871151   96006 main.go:141] libmachine: (ha-467706-m03)   <memory unit='MiB'>2200</memory>
	I0417 18:51:34.871163   96006 main.go:141] libmachine: (ha-467706-m03)   <vcpu>2</vcpu>
	I0417 18:51:34.871169   96006 main.go:141] libmachine: (ha-467706-m03)   <features>
	I0417 18:51:34.871176   96006 main.go:141] libmachine: (ha-467706-m03)     <acpi/>
	I0417 18:51:34.871183   96006 main.go:141] libmachine: (ha-467706-m03)     <apic/>
	I0417 18:51:34.871196   96006 main.go:141] libmachine: (ha-467706-m03)     <pae/>
	I0417 18:51:34.871206   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871248   96006 main.go:141] libmachine: (ha-467706-m03)   </features>
	I0417 18:51:34.871271   96006 main.go:141] libmachine: (ha-467706-m03)   <cpu mode='host-passthrough'>
	I0417 18:51:34.871280   96006 main.go:141] libmachine: (ha-467706-m03)   
	I0417 18:51:34.871291   96006 main.go:141] libmachine: (ha-467706-m03)   </cpu>
	I0417 18:51:34.871315   96006 main.go:141] libmachine: (ha-467706-m03)   <os>
	I0417 18:51:34.871336   96006 main.go:141] libmachine: (ha-467706-m03)     <type>hvm</type>
	I0417 18:51:34.871348   96006 main.go:141] libmachine: (ha-467706-m03)     <boot dev='cdrom'/>
	I0417 18:51:34.871358   96006 main.go:141] libmachine: (ha-467706-m03)     <boot dev='hd'/>
	I0417 18:51:34.871366   96006 main.go:141] libmachine: (ha-467706-m03)     <bootmenu enable='no'/>
	I0417 18:51:34.871375   96006 main.go:141] libmachine: (ha-467706-m03)   </os>
	I0417 18:51:34.871383   96006 main.go:141] libmachine: (ha-467706-m03)   <devices>
	I0417 18:51:34.871395   96006 main.go:141] libmachine: (ha-467706-m03)     <disk type='file' device='cdrom'>
	I0417 18:51:34.871411   96006 main.go:141] libmachine: (ha-467706-m03)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/boot2docker.iso'/>
	I0417 18:51:34.871424   96006 main.go:141] libmachine: (ha-467706-m03)       <target dev='hdc' bus='scsi'/>
	I0417 18:51:34.871435   96006 main.go:141] libmachine: (ha-467706-m03)       <readonly/>
	I0417 18:51:34.871442   96006 main.go:141] libmachine: (ha-467706-m03)     </disk>
	I0417 18:51:34.871452   96006 main.go:141] libmachine: (ha-467706-m03)     <disk type='file' device='disk'>
	I0417 18:51:34.871465   96006 main.go:141] libmachine: (ha-467706-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 18:51:34.871480   96006 main.go:141] libmachine: (ha-467706-m03)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/ha-467706-m03.rawdisk'/>
	I0417 18:51:34.871492   96006 main.go:141] libmachine: (ha-467706-m03)       <target dev='hda' bus='virtio'/>
	I0417 18:51:34.871500   96006 main.go:141] libmachine: (ha-467706-m03)     </disk>
	I0417 18:51:34.871509   96006 main.go:141] libmachine: (ha-467706-m03)     <interface type='network'>
	I0417 18:51:34.871517   96006 main.go:141] libmachine: (ha-467706-m03)       <source network='mk-ha-467706'/>
	I0417 18:51:34.871525   96006 main.go:141] libmachine: (ha-467706-m03)       <model type='virtio'/>
	I0417 18:51:34.871535   96006 main.go:141] libmachine: (ha-467706-m03)     </interface>
	I0417 18:51:34.871545   96006 main.go:141] libmachine: (ha-467706-m03)     <interface type='network'>
	I0417 18:51:34.871555   96006 main.go:141] libmachine: (ha-467706-m03)       <source network='default'/>
	I0417 18:51:34.871563   96006 main.go:141] libmachine: (ha-467706-m03)       <model type='virtio'/>
	I0417 18:51:34.871572   96006 main.go:141] libmachine: (ha-467706-m03)     </interface>
	I0417 18:51:34.871579   96006 main.go:141] libmachine: (ha-467706-m03)     <serial type='pty'>
	I0417 18:51:34.871585   96006 main.go:141] libmachine: (ha-467706-m03)       <target port='0'/>
	I0417 18:51:34.871615   96006 main.go:141] libmachine: (ha-467706-m03)     </serial>
	I0417 18:51:34.871633   96006 main.go:141] libmachine: (ha-467706-m03)     <console type='pty'>
	I0417 18:51:34.871648   96006 main.go:141] libmachine: (ha-467706-m03)       <target type='serial' port='0'/>
	I0417 18:51:34.871656   96006 main.go:141] libmachine: (ha-467706-m03)     </console>
	I0417 18:51:34.871666   96006 main.go:141] libmachine: (ha-467706-m03)     <rng model='virtio'>
	I0417 18:51:34.871693   96006 main.go:141] libmachine: (ha-467706-m03)       <backend model='random'>/dev/random</backend>
	I0417 18:51:34.871705   96006 main.go:141] libmachine: (ha-467706-m03)     </rng>
	I0417 18:51:34.871715   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871723   96006 main.go:141] libmachine: (ha-467706-m03)     
	I0417 18:51:34.871733   96006 main.go:141] libmachine: (ha-467706-m03)   </devices>
	I0417 18:51:34.871745   96006 main.go:141] libmachine: (ha-467706-m03) </domain>
	I0417 18:51:34.871751   96006 main.go:141] libmachine: (ha-467706-m03) 
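
The block above is the libvirt domain XML the kvm2 driver logs before creating the node VM: a CD-ROM boot entry for the boot2docker ISO, the raw disk backing file, and two virtio NICs on the private mk-ha-467706 network and the default network. As a rough illustration of how such a description becomes a running domain, here is a minimal sketch using the libvirt Go bindings; it is not minikube's code, and the XML file name is an assumption.

    // Illustrative sketch only, not part of the captured log or minikube's
    // source: define a persistent KVM domain from an XML description like
    // the one logged above, then start it.
    package main

    import (
        "fmt"
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical file holding the domain XML shown in the log.
        xmlDesc, err := os.ReadFile("ha-467706-m03.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain from XML, then boot it.
        dom, err := conn.DomainDefineXML(string(xmlDesc))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("domain defined and started")
    }
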
	I0417 18:51:34.878872   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:20:46:22 in network default
	I0417 18:51:34.879554   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring networks are active...
	I0417 18:51:34.879573   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:34.880392   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring network default is active
	I0417 18:51:34.880661   96006 main.go:141] libmachine: (ha-467706-m03) Ensuring network mk-ha-467706 is active
	I0417 18:51:34.881007   96006 main.go:141] libmachine: (ha-467706-m03) Getting domain xml...
	I0417 18:51:34.881716   96006 main.go:141] libmachine: (ha-467706-m03) Creating domain...
	I0417 18:51:36.106921   96006 main.go:141] libmachine: (ha-467706-m03) Waiting to get IP...
	I0417 18:51:36.107774   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.108244   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.108290   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.108235   96690 retry.go:31] will retry after 259.688955ms: waiting for machine to come up
	I0417 18:51:36.369919   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.370449   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.370484   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.370393   96690 retry.go:31] will retry after 263.833952ms: waiting for machine to come up
	I0417 18:51:36.636049   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:36.636520   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:36.636546   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:36.636478   96690 retry.go:31] will retry after 477.100713ms: waiting for machine to come up
	I0417 18:51:37.115192   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:37.115714   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:37.115748   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:37.115659   96690 retry.go:31] will retry after 585.751769ms: waiting for machine to come up
	I0417 18:51:37.703494   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:37.704022   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:37.704046   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:37.703971   96690 retry.go:31] will retry after 480.911798ms: waiting for machine to come up
	I0417 18:51:38.186810   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:38.187304   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:38.187336   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:38.187235   96690 retry.go:31] will retry after 741.971724ms: waiting for machine to come up
	I0417 18:51:38.931059   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:38.931460   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:38.931485   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:38.931420   96690 retry.go:31] will retry after 818.006613ms: waiting for machine to come up
	I0417 18:51:39.751433   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:39.751984   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:39.752015   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:39.751936   96690 retry.go:31] will retry after 1.076985012s: waiting for machine to come up
	I0417 18:51:40.830953   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:40.831445   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:40.831485   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:40.831385   96690 retry.go:31] will retry after 1.317961563s: waiting for machine to come up
	I0417 18:51:42.150497   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:42.150927   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:42.150950   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:42.150902   96690 retry.go:31] will retry after 1.665893506s: waiting for machine to come up
	I0417 18:51:43.818870   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:43.819324   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:43.819354   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:43.819268   96690 retry.go:31] will retry after 2.909952059s: waiting for machine to come up
	I0417 18:51:46.730539   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:46.731025   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:46.731049   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:46.730987   96690 retry.go:31] will retry after 3.59067388s: waiting for machine to come up
	I0417 18:51:50.322830   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:50.323323   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:50.323355   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:50.323263   96690 retry.go:31] will retry after 3.540199243s: waiting for machine to come up
	I0417 18:51:53.866714   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:53.867100   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find current IP address of domain ha-467706-m03 in network mk-ha-467706
	I0417 18:51:53.867130   96006 main.go:141] libmachine: (ha-467706-m03) DBG | I0417 18:51:53.867046   96690 retry.go:31] will retry after 3.58223567s: waiting for machine to come up
	I0417 18:51:57.450494   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.450992   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.451015   96006 main.go:141] libmachine: (ha-467706-m03) Found IP for machine: 192.168.39.250
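
The repeated "will retry after …" lines above are the driver polling for the new VM's DHCP lease with growing, jittered waits until an IP shows up. A minimal sketch of that wait-with-backoff pattern follows; lookupIP is a hypothetical stand-in for checking the host's DHCP leases, not minikube's actual helper.

    // Illustrative sketch (not from the log): poll a condition with jittered,
    // growing backoff, similar in spirit to the retries logged above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func lookupIP() (string, error) {
        // Stand-in: the real driver inspects the host's DHCP leases for the
        // domain's MAC address in network mk-ha-467706.
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jitter the wait and grow it, capping growth so polling continues.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(10 * time.Second); err == nil {
            fmt.Println("found IP:", ip)
        } else {
            fmt.Println(err)
        }
    }
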
	I0417 18:51:57.451069   96006 main.go:141] libmachine: (ha-467706-m03) Reserving static IP address...
	I0417 18:51:57.451464   96006 main.go:141] libmachine: (ha-467706-m03) DBG | unable to find host DHCP lease matching {name: "ha-467706-m03", mac: "52:54:00:93:9e:a9", ip: "192.168.39.250"} in network mk-ha-467706
	I0417 18:51:57.528115   96006 main.go:141] libmachine: (ha-467706-m03) Reserved static IP address: 192.168.39.250
	I0417 18:51:57.528155   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Getting to WaitForSSH function...
	I0417 18:51:57.528164   96006 main.go:141] libmachine: (ha-467706-m03) Waiting for SSH to be available...
	I0417 18:51:57.531251   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.531739   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.531764   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.531909   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using SSH client type: external
	I0417 18:51:57.531940   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa (-rw-------)
	I0417 18:51:57.531977   96006 main.go:141] libmachine: (ha-467706-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 18:51:57.531996   96006 main.go:141] libmachine: (ha-467706-m03) DBG | About to run SSH command:
	I0417 18:51:57.532010   96006 main.go:141] libmachine: (ha-467706-m03) DBG | exit 0
	I0417 18:51:57.660953   96006 main.go:141] libmachine: (ha-467706-m03) DBG | SSH cmd err, output: <nil>: 
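
Running "exit 0" over the external ssh client, with the exact options logged above, is how the driver decides the guest is reachable. A small sketch of the same probe with os/exec is below (host, key path, and options are copied from the log; this is an illustration, not the driver's implementation).

    // Illustrative sketch: probe SSH availability by running the external
    // ssh client with a remote "exit 0" and checking the exit status.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshReady(host, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0",
        }
        // A zero exit status from the remote "exit 0" means SSH is usable.
        return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func main() {
        ok := sshReady("192.168.39.250",
            "/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa")
        fmt.Println("ssh ready:", ok)
    }
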
	I0417 18:51:57.661254   96006 main.go:141] libmachine: (ha-467706-m03) KVM machine creation complete!
	I0417 18:51:57.661574   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:57.662186   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:57.662357   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:57.662518   96006 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 18:51:57.662533   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:51:57.663657   96006 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 18:51:57.663671   96006 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 18:51:57.663677   96006 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 18:51:57.663683   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.665980   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.666354   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.666381   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.666482   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.666701   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.666883   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.667009   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.667140   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.667390   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.667406   96006 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 18:51:57.776433   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:51:57.776460   96006 main.go:141] libmachine: Detecting the provisioner...
	I0417 18:51:57.776470   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.779557   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.780066   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.780108   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.780214   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.780447   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.780635   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.780844   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.781095   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.781318   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.781334   96006 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 18:51:57.890734   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 18:51:57.890841   96006 main.go:141] libmachine: found compatible host: buildroot
	I0417 18:51:57.890855   96006 main.go:141] libmachine: Provisioning with buildroot...
	I0417 18:51:57.890869   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:57.891166   96006 buildroot.go:166] provisioning hostname "ha-467706-m03"
	I0417 18:51:57.891206   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:57.891442   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:57.894112   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.894576   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:57.894606   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:57.894749   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:57.894916   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.895074   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:57.895260   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:57.895416   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:57.895592   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:57.895604   96006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706-m03 && echo "ha-467706-m03" | sudo tee /etc/hostname
	I0417 18:51:58.022957   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706-m03
	
	I0417 18:51:58.022997   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.026141   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.026552   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.026585   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.026799   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.027009   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.027194   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.027452   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.027784   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.028015   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.028040   96006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:51:58.147117   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:51:58.147158   96006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:51:58.147176   96006 buildroot.go:174] setting up certificates
	I0417 18:51:58.147187   96006 provision.go:84] configureAuth start
	I0417 18:51:58.147197   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetMachineName
	I0417 18:51:58.147514   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:58.150495   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.150863   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.150904   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.151108   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.153368   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.153737   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.153757   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.153954   96006 provision.go:143] copyHostCerts
	I0417 18:51:58.153999   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:51:58.154045   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:51:58.154059   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:51:58.154141   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:51:58.154273   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:51:58.154307   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:51:58.154318   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:51:58.154358   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:51:58.154424   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:51:58.154448   96006 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:51:58.154457   96006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:51:58.154489   96006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:51:58.154574   96006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706-m03 san=[127.0.0.1 192.168.39.250 ha-467706-m03 localhost minikube]
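
The "generating server cert" line lists the subject alternative names the new node's server certificate carries (loopback, the node IP, the hostname, and the generic names). For orientation, here is a compact sketch of issuing such a SAN-bearing server certificate from an existing CA with Go's standard library; the file names, the PKCS#1 key encoding, and the validity/key-usage choices are assumptions, not minikube's exact parameters.

    // Illustrative sketch (not minikube's code): sign a server certificate
    // with the SANs listed in the log, using an existing CA key pair.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustRead(path string) []byte {
        b, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        return b
    }

    func main() {
        // Assumes ca.pem / ca-key.pem exist next to the program and that the
        // CA key is PKCS#1-encoded.
        caBlock, _ := pem.Decode(mustRead("ca.pem"))
        keyBlock, _ := pem.Decode(mustRead("ca-key.pem"))
        if caBlock == nil || keyBlock == nil {
            log.Fatal("could not decode CA PEM data")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-467706-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line above.
            DNSNames:    []string{"ha-467706-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
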
	I0417 18:51:58.356725   96006 provision.go:177] copyRemoteCerts
	I0417 18:51:58.356820   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:51:58.356856   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.359545   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.359943   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.359981   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.360159   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.360359   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.360546   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.360688   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:58.443213   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:51:58.443311   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:51:58.471921   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:51:58.472006   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0417 18:51:58.498008   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:51:58.498081   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:51:58.524744   96006 provision.go:87] duration metric: took 377.543951ms to configureAuth
	I0417 18:51:58.524790   96006 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:51:58.525095   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:51:58.525185   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.527959   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.528315   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.528338   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.528543   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.528758   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.528953   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.529129   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.529297   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.529463   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.529477   96006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 18:51:58.811996   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 18:51:58.812039   96006 main.go:141] libmachine: Checking connection to Docker...
	I0417 18:51:58.812051   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetURL
	I0417 18:51:58.813614   96006 main.go:141] libmachine: (ha-467706-m03) DBG | Using libvirt version 6000000
	I0417 18:51:58.815988   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.816312   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.816337   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.816495   96006 main.go:141] libmachine: Docker is up and running!
	I0417 18:51:58.816517   96006 main.go:141] libmachine: Reticulating splines...
	I0417 18:51:58.816526   96006 client.go:171] duration metric: took 24.357851209s to LocalClient.Create
	I0417 18:51:58.816556   96006 start.go:167] duration metric: took 24.357933069s to libmachine.API.Create "ha-467706"
	I0417 18:51:58.816567   96006 start.go:293] postStartSetup for "ha-467706-m03" (driver="kvm2")
	I0417 18:51:58.816579   96006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 18:51:58.816597   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:58.816866   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 18:51:58.816890   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.819416   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.819788   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.819817   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.819908   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.820089   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.820264   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.820455   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:58.903423   96006 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 18:51:58.907793   96006 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 18:51:58.907829   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 18:51:58.907891   96006 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 18:51:58.907963   96006 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 18:51:58.907973   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 18:51:58.908078   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 18:51:58.917857   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:51:58.944503   96006 start.go:296] duration metric: took 127.921184ms for postStartSetup
	I0417 18:51:58.944571   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetConfigRaw
	I0417 18:51:58.945214   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:58.947772   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.948095   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.948138   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.948469   96006 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:51:58.948684   96006 start.go:128] duration metric: took 24.509091391s to createHost
	I0417 18:51:58.948711   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:58.951031   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.951386   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:58.951416   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:58.951598   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:58.951807   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.951995   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:58.952112   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:58.952234   96006 main.go:141] libmachine: Using SSH client type: native
	I0417 18:51:58.952417   96006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0417 18:51:58.952429   96006 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 18:51:59.061669   96006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713379919.025305300
	
	I0417 18:51:59.061695   96006 fix.go:216] guest clock: 1713379919.025305300
	I0417 18:51:59.061704   96006 fix.go:229] Guest: 2024-04-17 18:51:59.0253053 +0000 UTC Remote: 2024-04-17 18:51:58.948697509 +0000 UTC m=+156.376044537 (delta=76.607791ms)
	I0417 18:51:59.061723   96006 fix.go:200] guest clock delta is within tolerance: 76.607791ms
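
The clock check above compares the guest's `date +%s.%N` output with the host time and accepts the node when the delta is within tolerance. A tiny sketch of that parse-and-compare step is below; it reuses the logged sample value, so when run today the delta simply reflects the time elapsed since the test, and the tolerance constant is an assumption, not minikube's exact threshold.

    // Illustrative sketch: parse a `date +%s.%N` reading and compute the
    // clock delta against the local clock.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestRaw := "1713379919.025305300" // the guest reading captured above
        parts := strings.SplitN(guestRaw, ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            panic(err)
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold, not minikube's value
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
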
	I0417 18:51:59.061730   96006 start.go:83] releasing machines lock for "ha-467706-m03", held for 24.622249744s
	I0417 18:51:59.061754   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.062041   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:51:59.064824   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.065192   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.065231   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.067675   96006 out.go:177] * Found network options:
	I0417 18:51:59.069081   96006 out.go:177]   - NO_PROXY=192.168.39.159,192.168.39.236
	W0417 18:51:59.070321   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	W0417 18:51:59.070343   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:51:59.070360   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071000   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071241   96006 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:51:59.071364   96006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 18:51:59.071410   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	W0417 18:51:59.071447   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	W0417 18:51:59.071470   96006 proxy.go:119] fail to check proxy env: Error ip not in block
	I0417 18:51:59.071539   96006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 18:51:59.071564   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:51:59.075438   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.075906   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076399   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.076426   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076632   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:51:59.076660   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:59.076671   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:51:59.076837   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:51:59.076882   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:59.076982   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:51:59.077030   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:59.077184   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:51:59.077180   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:59.077333   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:51:59.312868   96006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 18:51:59.319779   96006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 18:51:59.319848   96006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 18:51:59.337213   96006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 18:51:59.337240   96006 start.go:494] detecting cgroup driver to use...
	I0417 18:51:59.337303   96006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 18:51:59.356790   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 18:51:59.371164   96006 docker.go:217] disabling cri-docker service (if available) ...
	I0417 18:51:59.371221   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 18:51:59.385299   96006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 18:51:59.399551   96006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 18:51:59.511492   96006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 18:51:59.680128   96006 docker.go:233] disabling docker service ...
	I0417 18:51:59.680200   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 18:51:59.695980   96006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 18:51:59.709911   96006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 18:51:59.853481   96006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 18:51:59.975362   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 18:51:59.990715   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 18:52:00.014556   96006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 18:52:00.014646   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.026160   96006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 18:52:00.026224   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.037683   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.049269   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.060422   96006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 18:52:00.072142   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.083153   96006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 18:52:00.103537   96006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
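
The run of `sed -i` commands above rewrites the CRI-O drop-in in place: pinning pause_image to registry.k8s.io/pause:3.9, switching cgroup_manager to cgroupfs, resetting conmon_cgroup, and opening unprivileged ports via default_sysctls. A small sketch of the same kind of line-oriented rewrite done in Go on an in-memory sample follows; the sample drop-in content is an assumption about what 02-crio.conf may contain.

    // Illustrative sketch: regexp-based, sed-style rewrites of two keys in a
    // CRI-O drop-in held in memory.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"

    [crio.runtime]
    cgroup_manager = "systemd"
    `
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
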
	I0417 18:52:00.115363   96006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 18:52:00.125483   96006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 18:52:00.125543   96006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 18:52:00.140258   96006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
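
When the bridge-netfilter sysctl cannot be read (the kernel module is not loaded yet, hence the status-255 "cannot stat" error above), the driver falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A minimal sketch of that probe-then-fallback sequence with os/exec is below; the commands mirror the logged ones and need root to actually take effect.

    // Illustrative sketch: probe a sysctl, load br_netfilter if the probe
    // fails, then enable IP forwarding.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        return err
    }

    func main() {
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // The sysctl only exists once the br_netfilter module is loaded.
            _ = run("sudo", "modprobe", "br_netfilter")
        }
        _ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }
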
	I0417 18:52:00.151232   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:00.284671   96006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 18:52:00.434033   96006 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 18:52:00.434122   96006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 18:52:00.439963   96006 start.go:562] Will wait 60s for crictl version
	I0417 18:52:00.440079   96006 ssh_runner.go:195] Run: which crictl
	I0417 18:52:00.444073   96006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 18:52:00.487475   96006 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 18:52:00.487559   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:52:00.516105   96006 ssh_runner.go:195] Run: crio --version
	I0417 18:52:00.546634   96006 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 18:52:00.548096   96006 out.go:177]   - env NO_PROXY=192.168.39.159
	I0417 18:52:00.549484   96006 out.go:177]   - env NO_PROXY=192.168.39.159,192.168.39.236
	I0417 18:52:00.550999   96006 main.go:141] libmachine: (ha-467706-m03) Calling .GetIP
	I0417 18:52:00.553930   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:52:00.554200   96006 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:52:00.554219   96006 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:52:00.554407   96006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 18:52:00.559384   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 18:52:00.573146   96006 mustload.go:65] Loading cluster: ha-467706
	I0417 18:52:00.573371   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:52:00.573651   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:00.573700   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:00.588993   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0417 18:52:00.589529   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:00.590035   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:00.590058   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:00.590436   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:00.590616   96006 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:52:00.592317   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:52:00.592729   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:00.592804   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:00.607728   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0417 18:52:00.608165   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:00.608719   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:00.608750   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:00.609117   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:00.609294   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
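
The "Launching plugin server / Plugin server listening at 127.0.0.1:<port>" lines reflect libmachine's plugin model: each driver binary runs as its own process and exposes an RPC server on a loopback port, which the client dials and calls (GetVersion, SetConfigRaw, GetMachineName, and so on). The sketch below shows that pattern in miniature with net/rpc; the Driver type and its method are illustrative, not libmachine's actual RPC surface.

    // Illustrative sketch: a loopback RPC server and client in one process,
    // echoing the version handshake seen in the log.
    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for a machine driver exposed over RPC.
    type Driver struct{}

    // GetVersion mirrors the "Using API Version 1" exchange.
    func (d *Driver) GetVersion(_ string, version *int) error {
        *version = 1
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        // Like "Plugin server listening at address 127.0.0.1:<port>".
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var version int
        if err := client.Call("Driver.GetVersion", "", &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("driver API version:", version)
    }
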
	I0417 18:52:00.609463   96006 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.250
	I0417 18:52:00.609482   96006 certs.go:194] generating shared ca certs ...
	I0417 18:52:00.609497   96006 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.609655   96006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 18:52:00.609709   96006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 18:52:00.609724   96006 certs.go:256] generating profile certs ...
	I0417 18:52:00.609820   96006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 18:52:00.609850   96006 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9
	I0417 18:52:00.609869   96006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.250 192.168.39.254]
	I0417 18:52:00.749277   96006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 ...
	I0417 18:52:00.749320   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9: {Name:mk6143d78cf42a990aa606d474f97e8b4fd0619a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.749616   96006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9 ...
	I0417 18:52:00.749648   96006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9: {Name:mk3bb67b98f87c1d920beec47452d123a46411b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 18:52:00.749762   96006 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.75a453e9 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 18:52:00.749897   96006 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.75a453e9 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 18:52:00.750025   96006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
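	For reference, the apiserver certificate above is issued with IP SANs covering the service IP, localhost, every control-plane node IP and the kube-vip VIP (192.168.39.254), so any control-plane endpoint can terminate TLS for the same hostname. Below is a minimal, illustrative Go sketch of issuing such a certificate with crypto/x509; it is not minikube's certs.go, and the CA here is generated inline purely for the example (the real run loads the existing minikubeCA key pair from the profile directory).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in CA generated on the fly for the sketch only.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Serving certificate whose IP SANs mirror the list logged above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.159"), net.ParseIP("192.168.39.236"),
				net.ParseIP("192.168.39.250"), net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}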
	I0417 18:52:00.750042   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 18:52:00.750054   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 18:52:00.750068   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 18:52:00.750081   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 18:52:00.750098   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 18:52:00.750110   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 18:52:00.750122   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 18:52:00.750134   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 18:52:00.750185   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 18:52:00.750214   96006 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 18:52:00.750223   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 18:52:00.750245   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 18:52:00.750267   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 18:52:00.750290   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 18:52:00.750324   96006 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 18:52:00.750348   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:00.750362   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 18:52:00.750374   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 18:52:00.750409   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:52:00.753425   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:00.753882   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:52:00.753914   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:00.754091   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:52:00.754308   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:52:00.754433   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:52:00.754565   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:52:00.825222   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0417 18:52:00.830594   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0417 18:52:00.843572   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0417 18:52:00.848432   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0417 18:52:00.860990   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0417 18:52:00.868349   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0417 18:52:00.881489   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0417 18:52:00.886462   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0417 18:52:00.902096   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0417 18:52:00.907262   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0417 18:52:00.920906   96006 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0417 18:52:00.926468   96006 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0417 18:52:00.941097   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 18:52:00.969343   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 18:52:00.997366   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 18:52:01.024743   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 18:52:01.054703   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0417 18:52:01.082003   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 18:52:01.109194   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 18:52:01.136303   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 18:52:01.162022   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 18:52:01.187722   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 18:52:01.214107   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 18:52:01.241348   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0417 18:52:01.261810   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0417 18:52:01.282278   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0417 18:52:01.301494   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0417 18:52:01.320460   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0417 18:52:01.339415   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0417 18:52:01.357976   96006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0417 18:52:01.375690   96006 ssh_runner.go:195] Run: openssl version
	I0417 18:52:01.381609   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 18:52:01.392798   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.397733   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.397804   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 18:52:01.403864   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 18:52:01.415745   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 18:52:01.428109   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.432877   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.432941   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 18:52:01.439267   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 18:52:01.450923   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 18:52:01.462324   96006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.467307   96006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.467382   96006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 18:52:01.473331   96006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
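	The three commands above copy each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), which is how the system trust store locates it. A minimal sketch of that hash-and-link step, assuming openssl is on PATH and using the minikubeCA path from the log as an example:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
		// Equivalent of: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent: drop any stale link before recreating it.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", cert, "->", link)
	}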
	I0417 18:52:01.484248   96006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 18:52:01.488720   96006 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 18:52:01.488810   96006 kubeadm.go:928] updating node {m03 192.168.39.250 8443 v1.30.0-rc.2 crio true true} ...
	I0417 18:52:01.488925   96006 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 18:52:01.488961   96006 kube-vip.go:111] generating kube-vip config ...
	I0417 18:52:01.489005   96006 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 18:52:01.505434   96006 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 18:52:01.505511   96006 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
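	The manifest printed above is a static pod: kube-vip runs on the host network of every control-plane node, uses a Lease (plndr-cp-lock) for leader election, announces the VIP 192.168.39.254 via ARP, and, because control-plane load-balancing was auto-enabled, also balances port 8443 across the API servers. A minimal sketch of rendering such a manifest from a template follows; the template text is a trimmed stand-in for illustration, not minikube's actual kube-vip template.

	package main

	import (
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: "{{ .VIP }}"
	    - name: cp_enable
	      value: "true"
	    - name: lb_enable
	      value: "{{ .EnableLB }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		// Values taken from the log: VIP 192.168.39.254, port 8443, LB auto-enabled.
		err := t.Execute(os.Stdout, struct {
			VIP      string
			Port     int
			EnableLB bool
		}{VIP: "192.168.39.254", Port: 8443, EnableLB: true})
		if err != nil {
			panic(err)
		}
	}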
	I0417 18:52:01.505571   96006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:52:01.517039   96006 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0-rc.2': No such file or directory
	
	Initiating transfer...
	I0417 18:52:01.517114   96006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 18:52:01.528436   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubeadm.sha256
	I0417 18:52:01.528460   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubelet.sha256
	I0417 18:52:01.528473   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:52:01.528483   96006 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0417 18:52:01.528495   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:52:01.528516   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:52:01.528548   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm
	I0417 18:52:01.528549   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl
	I0417 18:52:01.533531   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl': No such file or directory
	I0417 18:52:01.533565   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl (51454104 bytes)
	I0417 18:52:01.556412   96006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet -> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:52:01.556557   96006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet
	I0417 18:52:01.569927   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm': No such file or directory
	I0417 18:52:01.569973   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubeadm --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm (50249880 bytes)
	I0417 18:52:01.615729   96006 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet': No such file or directory
	I0417 18:52:01.615777   96006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubelet --> /var/lib/minikube/binaries/v1.30.0-rc.2/kubelet (100100024 bytes)
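	The pattern in this block is check-then-copy: stat the binary on the new node and only transfer it when stat exits non-zero ("Process exited with status 1" above means the file is missing). A simplified sketch of that pattern using plain ssh/scp; the host, key path and binary paths are examples taken from the log, and this is not minikube's ssh_runner implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		host := "docker@192.168.39.159"
		key := "/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa"
		local := "/home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl"
		remote := "/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl"

		// Existence check: stat fails when the binary is not yet on the node.
		if err := exec.Command("ssh", "-i", key, host, "stat", remote).Run(); err != nil {
			fmt.Println("binary missing, copying:", remote)
			if err := exec.Command("scp", "-i", key, local, host+":"+remote).Run(); err != nil {
				panic(err)
			}
			return
		}
		fmt.Println("binary already present:", remote)
	}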
	I0417 18:52:02.505814   96006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0417 18:52:02.516731   96006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0417 18:52:02.537107   96006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 18:52:02.555786   96006 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 18:52:02.574491   96006 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 18:52:02.580118   96006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
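	The one-liner above pins control-plane.minikube.internal to the kube-vip VIP by filtering any existing entry out of /etc/hosts and appending the new mapping. A harmless Go sketch of the same rewrite (it writes to a temp file rather than /etc/hosts, purely for illustration):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "control-plane.minikube.internal") {
				continue // same filter as the grep -v in the logged command
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.updated", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote updated hosts file to /tmp/hosts.updated")
	}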
	I0417 18:52:02.594496   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:02.741413   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:52:02.761049   96006 host.go:66] Checking if "ha-467706" exists ...
	I0417 18:52:02.761373   96006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:52:02.761425   96006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:52:02.776757   96006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0417 18:52:02.777305   96006 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:52:02.777802   96006 main.go:141] libmachine: Using API Version  1
	I0417 18:52:02.777823   96006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:52:02.778163   96006 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:52:02.778370   96006 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:52:02.778550   96006 start.go:316] joinCluster: &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 Cl
usterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:52:02.778748   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0417 18:52:02.778783   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:52:02.781899   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:02.782379   96006 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:52:02.782406   96006 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:52:02.782580   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:52:02.782768   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:52:02.782986   96006 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:52:02.783174   96006 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:52:02.957773   96006 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:52:02.957846   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1y032b.zfoaqppbodvod22o --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0417 18:52:27.749596   96006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1y032b.zfoaqppbodvod22o --discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-467706-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (24.791722831s)
	I0417 18:52:27.749646   96006 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0417 18:52:28.385191   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-467706-m03 minikube.k8s.io/updated_at=2024_04_17T18_52_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3 minikube.k8s.io/name=ha-467706 minikube.k8s.io/primary=false
	I0417 18:52:28.504077   96006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-467706-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0417 18:52:28.621348   96006 start.go:318] duration metric: took 25.842793494s to joinCluster
	I0417 18:52:28.621444   96006 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 18:52:28.623236   96006 out.go:177] * Verifying Kubernetes components...
	I0417 18:52:28.621761   96006 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:52:28.624642   96006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 18:52:28.898574   96006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 18:52:28.923168   96006 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:52:28.923555   96006 kapi.go:59] client config for ha-467706: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0417 18:52:28.923651   96006 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.159:8443
	I0417 18:52:28.923966   96006 node_ready.go:35] waiting up to 6m0s for node "ha-467706-m03" to be "Ready" ...
	I0417 18:52:28.924090   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:28.924104   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:28.924115   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:28.924120   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:28.927651   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:29.424873   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:29.424902   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:29.424914   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:29.424921   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:29.428630   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:29.924950   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:29.924978   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:29.924989   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:29.924995   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:29.929352   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:30.424980   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:30.425005   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:30.425016   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:30.425021   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:30.433101   96006 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0417 18:52:30.924634   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:30.924655   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:30.924663   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:30.924666   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:30.929216   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:30.930081   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:31.424251   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:31.424276   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:31.424287   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:31.424294   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:31.427884   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:31.925119   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:31.925141   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:31.925150   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:31.925153   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:31.929439   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:32.425059   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:32.425082   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:32.425090   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:32.425095   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:32.428735   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:32.925083   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:32.925105   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:32.925113   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:32.925117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:32.928326   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:33.424762   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:33.424797   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:33.424806   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:33.424810   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:33.428416   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:33.429153   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:33.924533   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:33.924556   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:33.924565   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:33.924569   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:33.928140   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:34.424439   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:34.424464   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:34.424472   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:34.424476   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:34.428193   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:34.925087   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:34.925111   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:34.925120   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:34.925125   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:34.928677   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:35.425172   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:35.425200   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:35.425210   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:35.425218   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:35.429160   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:35.429880   96006 node_ready.go:53] node "ha-467706-m03" has status "Ready":"False"
	I0417 18:52:35.925045   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:35.925069   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:35.925081   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:35.925086   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:35.929103   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.424535   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:36.424572   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.424584   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.424592   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.428304   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.925072   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:36.925098   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.925107   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.925112   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.929080   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.929865   96006 node_ready.go:49] node "ha-467706-m03" has status "Ready":"True"
	I0417 18:52:36.929899   96006 node_ready.go:38] duration metric: took 8.005907796s for node "ha-467706-m03" to be "Ready" ...
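	The readiness wait above is a simple poll of the node object until its Ready condition reports True, issued against the first control-plane endpoint after the stale VIP host was overridden. A minimal client-go sketch of that loop, using the kubeconfig path and node name from the log; the loop itself is a simplified stand-in for minikube's node_ready helper, not its actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-467706-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
		}
		panic("timed out waiting for node to become Ready")
	}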
	I0417 18:52:36.929921   96006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:52:36.930010   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:36.930023   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.930035   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.930040   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.936875   96006 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0417 18:52:36.943562   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.943658   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-56dz8
	I0417 18:52:36.943671   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.943682   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.943691   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.946981   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.947848   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.947865   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.947874   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.947877   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.951124   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.951882   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.951909   96006 pod_ready.go:81] duration metric: took 8.318595ms for pod "coredns-7db6d8ff4d-56dz8" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.951922   96006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.952004   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kcdqn
	I0417 18:52:36.952016   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.952025   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.952030   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.954946   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.956375   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.956391   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.956398   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.956402   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.960496   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:36.961176   96006 pod_ready.go:92] pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.961195   96006 pod_ready.go:81] duration metric: took 9.26631ms for pod "coredns-7db6d8ff4d-kcdqn" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.961204   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.961424   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706
	I0417 18:52:36.961444   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.961453   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.961460   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.964198   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.964962   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:36.964980   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.964990   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.964996   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.968568   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.969736   96006 pod_ready.go:92] pod "etcd-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.969755   96006 pod_ready.go:81] duration metric: took 8.543441ms for pod "etcd-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.969767   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.969836   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m02
	I0417 18:52:36.969846   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.969856   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.969864   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.973322   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:36.974022   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:36.974039   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:36.974049   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:36.974056   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:36.977049   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:36.977876   96006 pod_ready.go:92] pod "etcd-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:36.977893   96006 pod_ready.go:81] duration metric: took 8.118952ms for pod "etcd-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:36.977902   96006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:37.125207   96006 request.go:629] Waited for 147.237578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.125304   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.125317   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.125327   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.125335   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.129265   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:37.325969   96006 request.go:629] Waited for 195.91734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.326062   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.326069   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.326081   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.326087   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.329735   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:37.525979   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.526008   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.526024   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.526030   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.530532   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:37.725519   96006 request.go:629] Waited for 194.121985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.725593   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:37.725601   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.725612   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.725631   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.728991   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:37.978697   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:37.978723   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:37.978732   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:37.978736   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:37.982744   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.125959   96006 request.go:629] Waited for 142.267335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.126016   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.126020   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.126028   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.126033   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.129065   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.478351   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:38.478382   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.478393   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.478400   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.482019   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.526015   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.526040   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.526053   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.526057   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.530125   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:38.979130   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:38.979167   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.979181   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.979191   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.983274   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:38.984058   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:38.984074   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:38.984082   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:38.984087   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:38.987121   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:38.987824   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:39.478744   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:39.478767   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.478776   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.478780   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.482249   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:39.482930   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:39.482947   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.482955   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.482958   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.485711   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:39.978405   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:39.978428   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.978437   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.978441   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.983307   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:39.984232   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:39.984251   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:39.984260   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:39.984265   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:39.987576   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.478611   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:40.478635   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.478649   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.478654   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.482185   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.482957   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:40.482974   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.482982   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.482990   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.485806   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:40.978434   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:40.978458   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.978466   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.978470   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.982366   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:40.983312   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:40.983326   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:40.983334   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:40.983337   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:40.986184   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:41.478160   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:41.478186   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.478195   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.478205   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.482826   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:41.483571   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:41.483587   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.483597   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.483606   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.486864   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:41.487510   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:41.978993   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:41.979015   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.979031   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.979035   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.982762   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:41.983777   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:41.983799   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:41.983811   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:41.983817   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:41.987561   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.478573   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:42.478599   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.478609   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.478623   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.482464   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.483352   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:42.483371   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.483379   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.483384   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.486473   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:42.978778   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:42.978800   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.978808   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.978812   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.982956   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:42.984208   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:42.984229   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:42.984240   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:42.984245   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:42.987545   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:43.479123   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:43.479160   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.479173   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.479178   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.482599   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:43.483500   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:43.483519   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.483529   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.483538   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.488219   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:43.488884   96006 pod_ready.go:102] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"False"
	I0417 18:52:43.978220   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:43.978245   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.978254   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.978258   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.982430   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:43.983082   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:43.983101   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:43.983111   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:43.983117   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:43.986478   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.478768   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-ha-467706-m03
	I0417 18:52:44.478792   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.478800   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.478805   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.482390   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.482993   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:44.483009   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.483017   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.483022   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.486288   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.486885   96006 pod_ready.go:92] pod "etcd-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.486907   96006 pod_ready.go:81] duration metric: took 7.508997494s for pod "etcd-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.486931   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.487003   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706
	I0417 18:52:44.487014   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.487024   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.487033   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.489839   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.490630   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.490648   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.490659   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.490666   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.493317   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.493787   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.493806   96006 pod_ready.go:81] duration metric: took 6.868213ms for pod "kube-apiserver-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.493815   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.493866   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m02
	I0417 18:52:44.493875   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.493881   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.493885   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.496618   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.497206   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:44.497221   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.497228   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.497232   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.499850   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.500328   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.500346   96006 pod_ready.go:81] duration metric: took 6.524042ms for pod "kube-apiserver-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.500354   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.500398   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-467706-m03
	I0417 18:52:44.500406   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.500413   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.500416   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.502820   96006 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0417 18:52:44.525635   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:44.525658   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.525668   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.525674   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.529459   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.529911   96006 pod_ready.go:92] pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.529931   96006 pod_ready.go:81] duration metric: took 29.571504ms for pod "kube-apiserver-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.529941   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.725298   96006 request.go:629] Waited for 195.288314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:52:44.725379   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706
	I0417 18:52:44.725389   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.725400   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.725411   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.729804   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:44.925812   96006 request.go:629] Waited for 195.39534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.925880   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:44.925888   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:44.925896   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:44.925913   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:44.929676   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:44.930553   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:44.930576   96006 pod_ready.go:81] duration metric: took 400.628962ms for pod "kube-controller-manager-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:44.930586   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.125857   96006 request.go:629] Waited for 195.191389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:52:45.125925   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m02
	I0417 18:52:45.125932   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.125942   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.125948   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.129306   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.325424   96006 request.go:629] Waited for 195.384898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:45.325501   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:45.325507   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.325517   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.325525   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.329384   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.329913   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:45.329940   96006 pod_ready.go:81] duration metric: took 399.345281ms for pod "kube-controller-manager-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.329956   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.526037   96006 request.go:629] Waited for 195.983642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m03
	I0417 18:52:45.526113   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-467706-m03
	I0417 18:52:45.526118   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.526125   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.526129   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.529953   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.725683   96006 request.go:629] Waited for 195.070685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:45.725758   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:45.725766   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.725782   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.725790   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.729681   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:45.730371   96006 pod_ready.go:92] pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:45.730395   96006 pod_ready.go:81] duration metric: took 400.429466ms for pod "kube-controller-manager-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.730409   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:45.925377   96006 request.go:629] Waited for 194.888012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:52:45.925484   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd469
	I0417 18:52:45.925490   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:45.925498   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:45.925505   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:45.929973   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:46.125654   96006 request.go:629] Waited for 194.431513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:46.125734   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:46.125743   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.125755   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.125762   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.129903   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:46.130743   96006 pod_ready.go:92] pod "kube-proxy-hd469" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.130767   96006 pod_ready.go:81] duration metric: took 400.350111ms for pod "kube-proxy-hd469" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.130779   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jlcq7" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.325821   96006 request.go:629] Waited for 194.963898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jlcq7
	I0417 18:52:46.325910   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jlcq7
	I0417 18:52:46.325921   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.325931   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.325940   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.329752   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.526022   96006 request.go:629] Waited for 195.476053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:46.526102   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:46.526108   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.526116   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.526121   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.529860   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.530884   96006 pod_ready.go:92] pod "kube-proxy-jlcq7" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.530902   96006 pod_ready.go:81] duration metric: took 400.117162ms for pod "kube-proxy-jlcq7" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.530913   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.726192   96006 request.go:629] Waited for 195.191733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:52:46.726277   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxtf4
	I0417 18:52:46.726285   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.726295   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.726299   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.729763   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.925722   96006 request.go:629] Waited for 195.405879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:46.925783   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:46.925788   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:46.925795   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:46.925803   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:46.929239   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:46.930056   96006 pod_ready.go:92] pod "kube-proxy-qxtf4" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:46.930085   96006 pod_ready.go:81] duration metric: took 399.165332ms for pod "kube-proxy-qxtf4" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:46.930101   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.126140   96006 request.go:629] Waited for 195.938622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:52:47.126218   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706
	I0417 18:52:47.126224   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.126232   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.126237   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.129831   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.325630   96006 request.go:629] Waited for 195.196456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:47.325703   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706
	I0417 18:52:47.325716   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.325735   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.325746   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.329278   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.330132   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:47.330153   96006 pod_ready.go:81] duration metric: took 400.03338ms for pod "kube-scheduler-ha-467706" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.330165   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.525201   96006 request.go:629] Waited for 194.96066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:52:47.525294   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m02
	I0417 18:52:47.525304   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.525312   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.525316   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.528656   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:47.725569   96006 request.go:629] Waited for 196.155088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:47.725626   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m02
	I0417 18:52:47.725631   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.725639   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.725643   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.730074   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:47.730804   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:47.730829   96006 pod_ready.go:81] duration metric: took 400.655349ms for pod "kube-scheduler-ha-467706-m02" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.730843   96006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:47.925724   96006 request.go:629] Waited for 194.787766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m03
	I0417 18:52:47.925810   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-467706-m03
	I0417 18:52:47.925822   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:47.925829   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:47.925834   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:47.929948   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:48.125180   96006 request.go:629] Waited for 194.303544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:48.125265   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/ha-467706-m03
	I0417 18:52:48.125271   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.125280   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.125285   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.128743   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:48.129451   96006 pod_ready.go:92] pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace has status "Ready":"True"
	I0417 18:52:48.129480   96006 pod_ready.go:81] duration metric: took 398.623814ms for pod "kube-scheduler-ha-467706-m03" in "kube-system" namespace to be "Ready" ...
	I0417 18:52:48.129495   96006 pod_ready.go:38] duration metric: took 11.199554563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 18:52:48.129516   96006 api_server.go:52] waiting for apiserver process to appear ...
	I0417 18:52:48.129583   96006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:52:48.146310   96006 api_server.go:72] duration metric: took 19.524820593s to wait for apiserver process to appear ...
	I0417 18:52:48.146344   96006 api_server.go:88] waiting for apiserver healthz status ...
	I0417 18:52:48.146377   96006 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0417 18:52:48.153650   96006 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0417 18:52:48.153862   96006 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I0417 18:52:48.153876   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.153888   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.153894   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.154829   96006 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0417 18:52:48.154892   96006 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 18:52:48.154905   96006 api_server.go:131] duration metric: took 8.55388ms to wait for apiserver health ...
	I0417 18:52:48.154914   96006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 18:52:48.325249   96006 request.go:629] Waited for 170.263186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.325330   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.325336   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.325353   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.325361   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.332685   96006 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0417 18:52:48.339789   96006 system_pods.go:59] 24 kube-system pods found
	I0417 18:52:48.339819   96006 system_pods.go:61] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:52:48.339823   96006 system_pods.go:61] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:52:48.339827   96006 system_pods.go:61] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:52:48.339830   96006 system_pods.go:61] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:52:48.339833   96006 system_pods.go:61] "etcd-ha-467706-m03" [a79f9120-9c62-465e-8e06-97337c1eecd9] Running
	I0417 18:52:48.339836   96006 system_pods.go:61] "kindnet-5mvhn" [1d1c6ddb-22cf-489e-8958-41434cbf8b0c] Running
	I0417 18:52:48.339839   96006 system_pods.go:61] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:52:48.339842   96006 system_pods.go:61] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:52:48.339844   96006 system_pods.go:61] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:52:48.339848   96006 system_pods.go:61] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:52:48.339850   96006 system_pods.go:61] "kube-apiserver-ha-467706-m03" [69a7a929-d717-4c2f-9cca-c067dcc7610d] Running
	I0417 18:52:48.339853   96006 system_pods.go:61] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:52:48.339856   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:52:48.339860   96006 system_pods.go:61] "kube-controller-manager-ha-467706-m03" [ae6eeeac-1ab7-4b22-8691-69d534d6d73e] Running
	I0417 18:52:48.339862   96006 system_pods.go:61] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:52:48.339865   96006 system_pods.go:61] "kube-proxy-jlcq7" [05590f74-8ea6-42ef-9d72-33e15cfd3a32] Running
	I0417 18:52:48.339868   96006 system_pods.go:61] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:52:48.339870   96006 system_pods.go:61] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:52:48.339873   96006 system_pods.go:61] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:52:48.339876   96006 system_pods.go:61] "kube-scheduler-ha-467706-m03" [94c0d749-a1da-468b-baee-25f5177376e5] Running
	I0417 18:52:48.339878   96006 system_pods.go:61] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:52:48.339881   96006 system_pods.go:61] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:52:48.339884   96006 system_pods.go:61] "kube-vip-ha-467706-m03" [75d15d4b-ed49-4d98-aecd-713bead1e281] Running
	I0417 18:52:48.339887   96006 system_pods.go:61] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:52:48.339894   96006 system_pods.go:74] duration metric: took 184.973208ms to wait for pod list to return data ...
	I0417 18:52:48.339904   96006 default_sa.go:34] waiting for default service account to be created ...
	I0417 18:52:48.525253   96006 request.go:629] Waited for 185.260041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:52:48.525328   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I0417 18:52:48.525334   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.525343   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.525347   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.529005   96006 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0417 18:52:48.529165   96006 default_sa.go:45] found service account: "default"
	I0417 18:52:48.529185   96006 default_sa.go:55] duration metric: took 189.273406ms for default service account to be created ...
	I0417 18:52:48.529198   96006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 18:52:48.725658   96006 request.go:629] Waited for 196.383891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.725738   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I0417 18:52:48.725744   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.725752   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.725757   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.732978   96006 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0417 18:52:48.738859   96006 system_pods.go:86] 24 kube-system pods found
	I0417 18:52:48.738891   96006 system_pods.go:89] "coredns-7db6d8ff4d-56dz8" [242dc56e-69d4-4742-8c4a-26b465f94153] Running
	I0417 18:52:48.738897   96006 system_pods.go:89] "coredns-7db6d8ff4d-kcdqn" [5353b60b-c7db-4eac-b0e9-915a8df02ae6] Running
	I0417 18:52:48.738902   96006 system_pods.go:89] "etcd-ha-467706" [500bd5cb-de50-4277-8098-f412bc51408d] Running
	I0417 18:52:48.738906   96006 system_pods.go:89] "etcd-ha-467706-m02" [e5e98e5e-1530-406c-96cc-7f2c8a7fabba] Running
	I0417 18:52:48.738910   96006 system_pods.go:89] "etcd-ha-467706-m03" [a79f9120-9c62-465e-8e06-97337c1eecd9] Running
	I0417 18:52:48.738914   96006 system_pods.go:89] "kindnet-5mvhn" [1d1c6ddb-22cf-489e-8958-41434cbf8b0c] Running
	I0417 18:52:48.738918   96006 system_pods.go:89] "kindnet-hspjv" [5ccc61fa-7766-431c-9f06-4fdfe455f551] Running
	I0417 18:52:48.738922   96006 system_pods.go:89] "kindnet-k6b9s" [a5bb2604-19fa-40f0-aaad-43ebf30e0cbb] Running
	I0417 18:52:48.738927   96006 system_pods.go:89] "kube-apiserver-ha-467706" [abcc8e4d-65cb-4284-a5c3-959035327f06] Running
	I0417 18:52:48.738931   96006 system_pods.go:89] "kube-apiserver-ha-467706-m02" [a011e338-074d-4af2-81b2-c3075782bc95] Running
	I0417 18:52:48.738935   96006 system_pods.go:89] "kube-apiserver-ha-467706-m03" [69a7a929-d717-4c2f-9cca-c067dcc7610d] Running
	I0417 18:52:48.738939   96006 system_pods.go:89] "kube-controller-manager-ha-467706" [27890bb6-f74a-4577-8815-deb93497a69c] Running
	I0417 18:52:48.738943   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m02" [abca2329-458d-45ec-b95b-ba181216bc46] Running
	I0417 18:52:48.738948   96006 system_pods.go:89] "kube-controller-manager-ha-467706-m03" [ae6eeeac-1ab7-4b22-8691-69d534d6d73e] Running
	I0417 18:52:48.738954   96006 system_pods.go:89] "kube-proxy-hd469" [ec70213c-82da-44af-a5ef-34157c4edc01] Running
	I0417 18:52:48.738958   96006 system_pods.go:89] "kube-proxy-jlcq7" [05590f74-8ea6-42ef-9d72-33e15cfd3a32] Running
	I0417 18:52:48.738965   96006 system_pods.go:89] "kube-proxy-qxtf4" [a28fd6ef-279c-49be-9282-4a6c7083c601] Running
	I0417 18:52:48.738969   96006 system_pods.go:89] "kube-scheduler-ha-467706" [682de84e-f6f1-4962-98c1-708fc1bcb473] Running
	I0417 18:52:48.738975   96006 system_pods.go:89] "kube-scheduler-ha-467706-m02" [c254711d-76f7-42b5-b8cc-4f31c91d1bae] Running
	I0417 18:52:48.738980   96006 system_pods.go:89] "kube-scheduler-ha-467706-m03" [94c0d749-a1da-468b-baee-25f5177376e5] Running
	I0417 18:52:48.738992   96006 system_pods.go:89] "kube-vip-ha-467706" [b92af6a0-34f7-4bdb-b0c3-e2821f4e693c] Running
	I0417 18:52:48.738995   96006 system_pods.go:89] "kube-vip-ha-467706-m02" [9285fce9-715f-46ab-9171-41ae5065ea13] Running
	I0417 18:52:48.738999   96006 system_pods.go:89] "kube-vip-ha-467706-m03" [75d15d4b-ed49-4d98-aecd-713bead1e281] Running
	I0417 18:52:48.739002   96006 system_pods.go:89] "storage-provisioner" [b5a737ba-33c0-4c0d-ab14-fe98f2c6e903] Running
	I0417 18:52:48.739011   96006 system_pods.go:126] duration metric: took 209.807818ms to wait for k8s-apps to be running ...
	I0417 18:52:48.739021   96006 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 18:52:48.739068   96006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:52:48.758130   96006 system_svc.go:56] duration metric: took 19.098207ms WaitForService to wait for kubelet
	I0417 18:52:48.758166   96006 kubeadm.go:576] duration metric: took 20.136683772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:52:48.758192   96006 node_conditions.go:102] verifying NodePressure condition ...
	I0417 18:52:48.925633   96006 request.go:629] Waited for 167.350815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I0417 18:52:48.925711   96006 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I0417 18:52:48.925717   96006 round_trippers.go:469] Request Headers:
	I0417 18:52:48.925740   96006 round_trippers.go:473]     Accept: application/json, */*
	I0417 18:52:48.925762   96006 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0417 18:52:48.930050   96006 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0417 18:52:48.931223   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931267   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931278   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931282   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931285   96006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 18:52:48.931288   96006 node_conditions.go:123] node cpu capacity is 2
	I0417 18:52:48.931292   96006 node_conditions.go:105] duration metric: took 173.095338ms to run NodePressure ...
	I0417 18:52:48.931304   96006 start.go:240] waiting for startup goroutines ...
	I0417 18:52:48.931341   96006 start.go:254] writing updated cluster config ...
	I0417 18:52:48.931624   96006 ssh_runner.go:195] Run: rm -f paused
	I0417 18:52:48.985138   96006 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 18:52:48.987244   96006 out.go:177] * Done! kubectl is now configured to use "ha-467706" cluster and "default" namespace by default
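	[editor's note] The pod_ready / api_server steps traced above follow a standard Kubernetes polling pattern: check the apiserver /healthz endpoint, then repeatedly GET each control-plane pod until its Ready condition is True. The sketch below is a minimal, illustrative reconstruction of that pattern using client-go; it is not minikube's own helper code, and the kubeconfig path, pod name, and timeout values are placeholder assumptions.
	
	```go
	// Illustrative only: poll apiserver health and wait for a pod's Ready condition,
	// mirroring the "waiting for apiserver healthz status" and pod_ready steps in the log.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder kubeconfig path (the report uses its own Jenkins workspace path).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
	
		// 1. apiserver healthz check (the log expects a 200 "ok" body).
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		fmt.Printf("healthz: %s (err=%v)\n", body, err)
	
		// 2. Poll until the pod reports Ready, as pod_ready does for the etcd/apiserver pods.
		err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-467706-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("pod ready wait finished, err:", err)
	}
	```
	
	The client-side throttling messages in the log ("Waited for ... due to client-side throttling") come from the default client-go rate limiter, which is why the per-pod GETs above are spaced out rather than issued back-to-back.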
	
	
	==> CRI-O <==
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.835474628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380231835447498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0aee9d16-b47a-4156-90bc-d8323303e95d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.836217191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e715a508-9313-4677-8c37-4fe93a229753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.836277932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e715a508-9313-4677-8c37-4fe93a229753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.836612188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e715a508-9313-4677-8c37-4fe93a229753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.882179328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ebd7ad5-daed-40e9-9ef0-7bbe75339f86 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.882548972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ebd7ad5-daed-40e9-9ef0-7bbe75339f86 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.883904074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c55dcb4e-3136-4752-91bb-9234fd7c13b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.884414686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380231884392733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c55dcb4e-3136-4752-91bb-9234fd7c13b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.884989814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adb6094d-6037-45a8-b11e-b0a88d1c06af name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.885047060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adb6094d-6037-45a8-b11e-b0a88d1c06af name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.885500648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adb6094d-6037-45a8-b11e-b0a88d1c06af name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.936499040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad92cf9a-2129-4490-a20f-5aebf05b303a name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.936635845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad92cf9a-2129-4490-a20f-5aebf05b303a name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.944964709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e50aaadd-0775-4da7-a1db-0da141d4255b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.945882460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380231945852980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e50aaadd-0775-4da7-a1db-0da141d4255b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.946514023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=304178ba-5bd7-498c-9869-dc22a4840fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.946566095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=304178ba-5bd7-498c-9869-dc22a4840fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.946838076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=304178ba-5bd7-498c-9869-dc22a4840fad name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.988508451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21cda65a-5ddc-4faf-bd7c-7e3f432bbf26 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.988692693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21cda65a-5ddc-4faf-bd7c-7e3f432bbf26 name=/runtime.v1.RuntimeService/Version
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.990780182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bedc3867-5dd2-4ac8-9b65-136ec6c39ccd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.991699739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380231991671613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bedc3867-5dd2-4ac8-9b65-136ec6c39ccd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.992941532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e8678ae-fec9-4c40-9abb-a06c0e405f8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.992994436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e8678ae-fec9-4c40-9abb-a06c0e405f8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 18:57:11 ha-467706 crio[683]: time="2024-04-17 18:57:11.993590096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713379972464206680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e69dba1da3a09443417dec6b066eff9a59f12bf747e8bd9782ec63c0141f6b,PodSandboxId:b3b3a885a73fc4348ee424eaec1dcb1583e0e30d6740438d7a49ba9665fd8bfa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713379820308051209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820245176087,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713379820252193082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7
db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d,PodSandboxId:a4a14e5274f052a23b3343825e9cd999cca173c2d5c9157ca3dadbb01c59e890,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17133798
18001748637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713379817830885641,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b,PodSandboxId:209f90ed9f3f5331a9d6a950ad6f8fd78700123b0eec63defd46bd082ae3b1da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713379801424337361,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d73414bb22a378bd54afe8a51fdffd5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713379798099381628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f,PodSandboxId:2d8b6f55b0eabcbd40b5315040c12b666ddb8ded0194ab9a6bd643bec96f6430,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713379798071709810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e,PodSandboxId:844fdf54706b9d0da9d8977d1435d0d8891e140577322d205e1b5767397fcf4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713379798054701454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713379797955002026,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e8678ae-fec9-4c40-9abb-a06c0e405f8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	93e18e5085cb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1b57101a3681c       busybox-fc5497c4f-r65s7
	23e69dba1da3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b3b3a885a73fc       storage-provisioner
	143bf06c19825       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0836a6cd9f827       coredns-7db6d8ff4d-kcdqn
	56dd0755cda79       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   2887673f339d8       coredns-7db6d8ff4d-56dz8
	2f2ed526ef2f9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   a4a14e5274f05       kindnet-hspjv
	fe8aab67cc372       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      6 minutes ago       Running             kube-proxy                0                   269ac099b43b4       kube-proxy-hd469
	c2e7dc14e0398       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   209f90ed9f3f5       kube-vip-ha-467706
	0b4b6b19cdcea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   167b41d6ec7a7       etcd-ha-467706
	d1e96d91894cf       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      7 minutes ago       Running             kube-apiserver            0                   2d8b6f55b0eab       kube-apiserver-ha-467706
	644754e2725b2       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      7 minutes ago       Running             kube-controller-manager   0                   844fdf54706b9       kube-controller-manager-ha-467706
	7f539c70ed4df       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      7 minutes ago       Running             kube-scheduler            0                   18f84a94ee364       kube-scheduler-ha-467706
	
	
	==> coredns [143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0] <==
	[INFO] 10.244.2.2:44486 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000369814s
	[INFO] 10.244.2.2:33799 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190447s
	[INFO] 10.244.0.4:52709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115254s
	[INFO] 10.244.0.4:45280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018129s
	[INFO] 10.244.0.4:55894 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001481686s
	[INFO] 10.244.0.4:41971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086394s
	[INFO] 10.244.1.2:45052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204533s
	[INFO] 10.244.1.2:56976 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00191173s
	[INFO] 10.244.1.2:48269 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205618s
	[INFO] 10.244.1.2:41050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145556s
	[INFO] 10.244.1.2:40399 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367129s
	[INFO] 10.244.1.2:34908 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024876s
	[INFO] 10.244.1.2:33490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115098s
	[INFO] 10.244.1.2:43721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162828s
	[INFO] 10.244.2.2:52076 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338786s
	[INFO] 10.244.0.4:58146 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084273s
	[INFO] 10.244.0.4:46620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011163s
	[INFO] 10.244.1.2:55749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161622s
	[INFO] 10.244.1.2:50475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112723s
	[INFO] 10.244.2.2:58296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123831s
	[INFO] 10.244.2.2:42756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149112s
	[INFO] 10.244.2.2:44779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135979s
	[INFO] 10.244.0.4:32859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254227s
	[INFO] 10.244.0.4:39694 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091483s
	[INFO] 10.244.1.2:48582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162571s
	
	
	==> coredns [56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230] <==
	[INFO] 10.244.1.2:40690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216989s
	[INFO] 10.244.1.2:51761 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001998901s
	[INFO] 10.244.2.2:55936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277709s
	[INFO] 10.244.2.2:59321 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021403442s
	[INFO] 10.244.2.2:33112 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000361777s
	[INFO] 10.244.2.2:44063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012356307s
	[INFO] 10.244.2.2:52058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226126s
	[INFO] 10.244.2.2:45346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192986s
	[INFO] 10.244.0.4:42980 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001884522s
	[INFO] 10.244.0.4:33643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177169s
	[INFO] 10.244.0.4:55640 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105826s
	[INFO] 10.244.0.4:54019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112453s
	[INFO] 10.244.2.2:41133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144651s
	[INFO] 10.244.2.2:59362 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099749s
	[INFO] 10.244.2.2:32859 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102166s
	[INFO] 10.244.0.4:33356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105006s
	[INFO] 10.244.0.4:56803 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133644s
	[INFO] 10.244.1.2:34244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104241s
	[INFO] 10.244.1.2:43628 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148576s
	[INFO] 10.244.2.2:50718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000190649s
	[INFO] 10.244.0.4:44677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013354s
	[INFO] 10.244.0.4:45227 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159231s
	[INFO] 10.244.1.2:46121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135561s
	[INFO] 10.244.1.2:43459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116038s
	[INFO] 10.244.1.2:34953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088316s
	
	
	==> describe nodes <==
	Name:               ha-467706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:57:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:53:08 +0000   Wed, 17 Apr 2024 18:50:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    ha-467706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3208cc9eadd3453fab86398575c87f4f
	  System UUID:                3208cc9e-add3-453f-ab86-398575c87f4f
	  Boot ID:                    142d9103-8e77-48a0-a260-5d3c6e2e5842
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r65s7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 coredns-7db6d8ff4d-56dz8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 coredns-7db6d8ff4d-kcdqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 etcd-ha-467706                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m8s
	  kube-system                 kindnet-hspjv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m56s
	  kube-system                 kube-apiserver-ha-467706             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-ha-467706    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-proxy-hd469                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-ha-467706             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-vip-ha-467706                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m54s  kube-proxy       
	  Normal  Starting                 7m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m8s   kubelet          Node ha-467706 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s   kubelet          Node ha-467706 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s   kubelet          Node ha-467706 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m56s  node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal  NodeReady                6m53s  kubelet          Node ha-467706 status is now: NodeReady
	  Normal  RegisteredNode           5m43s  node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal  RegisteredNode           4m29s  node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	
	
	Name:               ha-467706-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:51:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:53:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Apr 2024 18:53:14 +0000   Wed, 17 Apr 2024 18:54:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.236
	  Hostname:    ha-467706-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f49a89e0b0d7432fa507fd1ad108778d
	  System UUID:                f49a89e0-b0d7-432f-a507-fd1ad108778d
	  Boot ID:                    a312ddbf-6416-4cd3-b83f-4a865cbb9daf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xg855                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-467706-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-k6b9s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-467706-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-controller-manager-ha-467706-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-qxtf4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-467706-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-vip-ha-467706-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m56s                node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           5m43s                node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           4m29s                node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  NodeNotReady             2m33s                node-controller  Node ha-467706-m02 status is now: NodeNotReady
	
	
	Name:               ha-467706-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_52_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:52:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:57:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:52:55 +0000   Wed, 17 Apr 2024 18:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-467706-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f9c357ef5b24ca6b2e9c8c989ff32f8
	  System UUID:                6f9c357e-f5b2-4ca6-b2e9-c8c989ff32f8
	  Boot ID:                    920b7140-01bb-49d7-ab98-44319db0cc1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzsn2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-467706-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m45s
	  kube-system                 kindnet-5mvhn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m47s
	  kube-system                 kube-apiserver-ha-467706-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-ha-467706-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-jlcq7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-ha-467706-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-vip-ha-467706-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-467706-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	
	
	Name:               ha-467706-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_53_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:53:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:53:55 +0000   Wed, 17 Apr 2024 18:53:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-467706-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd00fe12f3b54ed0af7c6ee4cc75cc20
	  System UUID:                dd00fe12-f3b5-4ed0-af7c-6ee4cc75cc20
	  Boot ID:                    af4f4186-bd72-4c86-9e7c-b804dc030414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-v8r5k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m48s
	  kube-system                 kube-proxy-c7znr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m48s (x2 over 3m48s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x2 over 3m48s)  kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x2 over 3m48s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node ha-467706-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr17 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053092] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.561690] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796176] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.989146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.404684] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.063801] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068254] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163928] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151596] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.299739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.466528] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062588] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.060993] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.990848] kauditd_printk_skb: 62 callbacks suppressed
	[Apr17 18:50] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.084981] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.627165] kauditd_printk_skb: 21 callbacks suppressed
	[Apr17 18:51] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04] <==
	{"level":"warn","ts":"2024-04-17T18:57:12.284358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.293965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.298675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.328861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.329871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.330776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.341777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.349561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.353612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.357404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.366205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.373735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.381081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.381308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.38681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.390786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.398898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.407264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.416065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.420784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.425816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.438449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.446842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.456307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-17T18:57:12.481377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f0ef8018a32f46af","from":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:57:12 up 7 min,  0 users,  load average: 0.42, 0.18, 0.08
	Linux ha-467706 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d] <==
	I0417 18:56:39.624634       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:56:49.640707       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:56:49.640754       1 main.go:227] handling current node
	I0417 18:56:49.640766       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:56:49.640772       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:56:49.640873       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:56:49.640904       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:56:49.640954       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:56:49.640983       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:56:59.654432       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:56:59.654596       1 main.go:227] handling current node
	I0417 18:56:59.654630       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:56:59.654717       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:56:59.654884       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:56:59.654937       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:56:59.655053       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:56:59.655159       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 18:57:09.670962       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 18:57:09.671055       1 main.go:227] handling current node
	I0417 18:57:09.671263       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 18:57:09.671300       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 18:57:09.671515       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 18:57:09.671547       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 18:57:09.671611       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 18:57:09.671616       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f] <==
	I0417 18:50:04.318202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 18:50:04.355194       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0417 18:50:04.378610       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 18:50:16.846627       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0417 18:50:16.894909       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0417 18:52:26.096584       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0417 18:52:26.096677       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0417 18:52:26.096707       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.196µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0417 18:52:26.098071       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0417 18:52:26.098305       1 timeout.go:142] post-timeout activity - time-elapsed: 1.438942ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0417 18:52:53.633381       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32846: use of closed network connection
	E0417 18:52:53.851620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55360: use of closed network connection
	E0417 18:52:54.066049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55380: use of closed network connection
	E0417 18:52:54.312073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55394: use of closed network connection
	E0417 18:52:54.530310       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0417 18:52:54.749321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55418: use of closed network connection
	E0417 18:52:54.970072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55426: use of closed network connection
	E0417 18:52:55.170376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55456: use of closed network connection
	E0417 18:52:55.363232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55474: use of closed network connection
	E0417 18:52:55.704964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55496: use of closed network connection
	E0417 18:52:55.917674       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55522: use of closed network connection
	E0417 18:52:56.149324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55548: use of closed network connection
	E0417 18:52:56.358780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55554: use of closed network connection
	E0417 18:52:56.573732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55562: use of closed network connection
	W0417 18:54:12.608672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.250]
	
	
	==> kube-controller-manager [644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e] <==
	I0417 18:51:16.104067       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m02"
	I0417 18:52:25.246945       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-467706-m03\" does not exist"
	I0417 18:52:25.263499       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-467706-m03" podCIDRs=["10.244.2.0/24"]
	I0417 18:52:26.133648       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m03"
	I0417 18:52:49.983995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.444635ms"
	I0417 18:52:50.033279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.02233ms"
	I0417 18:52:50.161420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.074688ms"
	I0417 18:52:50.383994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.50029ms"
	I0417 18:52:50.433875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.821909ms"
	I0417 18:52:50.434037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.115µs"
	I0417 18:52:51.508597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.721µs"
	I0417 18:52:52.536956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.409602ms"
	I0417 18:52:52.537269       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.901µs"
	I0417 18:52:53.029529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.842838ms"
	I0417 18:52:53.029630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.918µs"
	I0417 18:52:53.104812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.029182ms"
	I0417 18:52:53.105468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.414µs"
	E0417 18:53:24.007584       1 certificate_controller.go:146] Sync csr-gt8sv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gt8sv": the object has been modified; please apply your changes to the latest version and try again
	I0417 18:53:24.279715       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-467706-m04\" does not exist"
	I0417 18:53:24.315391       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-467706-m04" podCIDRs=["10.244.3.0/24"]
	I0417 18:53:26.172529       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-467706-m04"
	I0417 18:53:34.662934       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-467706-m04"
	I0417 18:54:39.012020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-467706-m04"
	I0417 18:54:39.148264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.816388ms"
	I0417 18:54:39.148404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.341µs"
	
	
	==> kube-proxy [fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1] <==
	I0417 18:50:18.091985       1 server_linux.go:69] "Using iptables proxy"
	I0417 18:50:18.123753       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0417 18:50:18.174594       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 18:50:18.174693       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 18:50:18.174745       1 server_linux.go:165] "Using iptables Proxier"
	I0417 18:50:18.177785       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 18:50:18.178232       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 18:50:18.178431       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 18:50:18.179602       1 config.go:192] "Starting service config controller"
	I0417 18:50:18.179652       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 18:50:18.179690       1 config.go:101] "Starting endpoint slice config controller"
	I0417 18:50:18.179706       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 18:50:18.180692       1 config.go:319] "Starting node config controller"
	I0417 18:50:18.180731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 18:50:18.279797       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 18:50:18.279873       1 shared_informer.go:320] Caches are synced for service config
	I0417 18:50:18.281317       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c] <==
	W0417 18:50:01.055262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 18:50:01.058446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 18:50:01.902822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 18:50:01.902927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 18:50:01.966916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:50:01.967036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 18:50:01.967229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 18:50:01.967646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 18:50:02.044397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 18:50:02.045012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 18:50:02.094905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0417 18:50:02.096485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0417 18:50:02.376262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 18:50:02.376346       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0417 18:50:04.574514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0417 18:52:25.323167       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5mvhn\": pod kindnet-5mvhn is already assigned to node \"ha-467706-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-5mvhn" node="ha-467706-m03"
	E0417 18:52:25.323308       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1d1c6ddb-22cf-489e-8958-41434cbf8b0c(kube-system/kindnet-5mvhn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5mvhn"
	E0417 18:52:25.323334       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5mvhn\": pod kindnet-5mvhn is already assigned to node \"ha-467706-m03\"" pod="kube-system/kindnet-5mvhn"
	I0417 18:52:25.323383       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5mvhn" node="ha-467706-m03"
	E0417 18:52:25.324325       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jlcq7\": pod kube-proxy-jlcq7 is already assigned to node \"ha-467706-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jlcq7" node="ha-467706-m03"
	E0417 18:52:25.324409       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 05590f74-8ea6-42ef-9d72-33e15cfd3a32(kube-system/kube-proxy-jlcq7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jlcq7"
	E0417 18:52:25.324432       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jlcq7\": pod kube-proxy-jlcq7 is already assigned to node \"ha-467706-m03\"" pod="kube-system/kube-proxy-jlcq7"
	I0417 18:52:25.324451       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jlcq7" node="ha-467706-m03"
	E0417 18:53:24.402687       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wth9x\": pod kindnet-wth9x is already assigned to node \"ha-467706-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wth9x" node="ha-467706-m04"
	E0417 18:53:24.402914       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wth9x\": pod kindnet-wth9x is already assigned to node \"ha-467706-m04\"" pod="kube-system/kindnet-wth9x"
	
	
	==> kubelet <==
	Apr 17 18:53:04 ha-467706 kubelet[1377]: E0417 18:53:04.307805    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:53:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:53:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:54:04 ha-467706 kubelet[1377]: E0417 18:54:04.306427    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:54:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:54:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:55:04 ha-467706 kubelet[1377]: E0417 18:55:04.305151    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:55:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:55:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:56:04 ha-467706 kubelet[1377]: E0417 18:56:04.307844    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:56:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:56:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:57:04 ha-467706 kubelet[1377]: E0417 18:57:04.304401    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:57:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:57:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:57:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:57:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-467706 -n ha-467706
helpers_test.go:261: (dbg) Run:  kubectl --context ha-467706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-467706 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-467706 -v=7 --alsologtostderr
E0417 18:58:19.319368   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:58:47.004251   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-467706 -v=7 --alsologtostderr: exit status 82 (2m2.065636525s)

                                                
                                                
-- stdout --
	* Stopping node "ha-467706-m04"  ...
	* Stopping node "ha-467706-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 18:57:14.087239  101275 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:57:14.087805  101275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:57:14.087864  101275 out.go:304] Setting ErrFile to fd 2...
	I0417 18:57:14.087883  101275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:57:14.088381  101275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:57:14.089094  101275 out.go:298] Setting JSON to false
	I0417 18:57:14.089224  101275 mustload.go:65] Loading cluster: ha-467706
	I0417 18:57:14.089592  101275 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:57:14.089686  101275 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:57:14.089862  101275 mustload.go:65] Loading cluster: ha-467706
	I0417 18:57:14.089997  101275 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:57:14.090026  101275 stop.go:39] StopHost: ha-467706-m04
	I0417 18:57:14.090399  101275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:14.090460  101275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:14.106040  101275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0417 18:57:14.106679  101275 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:14.107324  101275 main.go:141] libmachine: Using API Version  1
	I0417 18:57:14.107354  101275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:14.107742  101275 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:14.110588  101275 out.go:177] * Stopping node "ha-467706-m04"  ...
	I0417 18:57:14.112226  101275 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0417 18:57:14.112272  101275 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 18:57:14.112628  101275 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0417 18:57:14.112660  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 18:57:14.116202  101275 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:14.116676  101275 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:53:12 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 18:57:14.116704  101275 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 18:57:14.116959  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 18:57:14.117181  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 18:57:14.117353  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 18:57:14.117522  101275 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 18:57:14.201693  101275 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0417 18:57:14.259008  101275 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0417 18:57:14.315044  101275 main.go:141] libmachine: Stopping "ha-467706-m04"...
	I0417 18:57:14.315105  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:57:14.316592  101275 main.go:141] libmachine: (ha-467706-m04) Calling .Stop
	I0417 18:57:14.320422  101275 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 0/120
	I0417 18:57:15.658876  101275 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 18:57:15.660184  101275 main.go:141] libmachine: Machine "ha-467706-m04" was stopped.
	I0417 18:57:15.660211  101275 stop.go:75] duration metric: took 1.547985571s to stop
	I0417 18:57:15.660234  101275 stop.go:39] StopHost: ha-467706-m03
	I0417 18:57:15.660626  101275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:57:15.660692  101275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:57:15.676625  101275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I0417 18:57:15.677114  101275 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:57:15.677674  101275 main.go:141] libmachine: Using API Version  1
	I0417 18:57:15.677698  101275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:57:15.678075  101275 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:57:15.681048  101275 out.go:177] * Stopping node "ha-467706-m03"  ...
	I0417 18:57:15.682338  101275 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0417 18:57:15.682360  101275 main.go:141] libmachine: (ha-467706-m03) Calling .DriverName
	I0417 18:57:15.682615  101275 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0417 18:57:15.682650  101275 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHHostname
	I0417 18:57:15.685659  101275 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:15.686121  101275 main.go:141] libmachine: (ha-467706-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:9e:a9", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:51:49 +0000 UTC Type:0 Mac:52:54:00:93:9e:a9 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-467706-m03 Clientid:01:52:54:00:93:9e:a9}
	I0417 18:57:15.686161  101275 main.go:141] libmachine: (ha-467706-m03) DBG | domain ha-467706-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:93:9e:a9 in network mk-ha-467706
	I0417 18:57:15.686338  101275 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHPort
	I0417 18:57:15.686512  101275 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHKeyPath
	I0417 18:57:15.686672  101275 main.go:141] libmachine: (ha-467706-m03) Calling .GetSSHUsername
	I0417 18:57:15.686831  101275 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m03/id_rsa Username:docker}
	I0417 18:57:15.774768  101275 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0417 18:57:15.829757  101275 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0417 18:57:15.886758  101275 main.go:141] libmachine: Stopping "ha-467706-m03"...
	I0417 18:57:15.886812  101275 main.go:141] libmachine: (ha-467706-m03) Calling .GetState
	I0417 18:57:15.888466  101275 main.go:141] libmachine: (ha-467706-m03) Calling .Stop
	I0417 18:57:15.891924  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 0/120
	I0417 18:57:16.893437  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 1/120
	I0417 18:57:17.894939  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 2/120
	I0417 18:57:18.896495  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 3/120
	I0417 18:57:19.897990  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 4/120
	I0417 18:57:20.900178  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 5/120
	I0417 18:57:21.901910  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 6/120
	I0417 18:57:22.903453  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 7/120
	I0417 18:57:23.905086  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 8/120
	I0417 18:57:24.906676  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 9/120
	I0417 18:57:25.908101  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 10/120
	I0417 18:57:26.909484  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 11/120
	I0417 18:57:27.911583  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 12/120
	I0417 18:57:28.913121  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 13/120
	I0417 18:57:29.915640  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 14/120
	I0417 18:57:30.917613  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 15/120
	I0417 18:57:31.919112  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 16/120
	I0417 18:57:32.920735  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 17/120
	I0417 18:57:33.922230  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 18/120
	I0417 18:57:34.924684  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 19/120
	I0417 18:57:35.926518  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 20/120
	I0417 18:57:36.928104  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 21/120
	I0417 18:57:37.929581  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 22/120
	I0417 18:57:38.931402  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 23/120
	I0417 18:57:39.932868  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 24/120
	I0417 18:57:40.935029  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 25/120
	I0417 18:57:41.936865  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 26/120
	I0417 18:57:42.938521  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 27/120
	I0417 18:57:43.939997  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 28/120
	I0417 18:57:44.941610  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 29/120
	I0417 18:57:45.943668  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 30/120
	I0417 18:57:46.945377  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 31/120
	I0417 18:57:47.946956  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 32/120
	I0417 18:57:48.948523  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 33/120
	I0417 18:57:49.950501  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 34/120
	I0417 18:57:50.952442  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 35/120
	I0417 18:57:51.954243  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 36/120
	I0417 18:57:52.955528  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 37/120
	I0417 18:57:53.957126  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 38/120
	I0417 18:57:54.958530  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 39/120
	I0417 18:57:55.960527  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 40/120
	I0417 18:57:56.961877  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 41/120
	I0417 18:57:57.963310  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 42/120
	I0417 18:57:58.964591  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 43/120
	I0417 18:57:59.966059  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 44/120
	I0417 18:58:00.967945  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 45/120
	I0417 18:58:01.970376  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 46/120
	I0417 18:58:02.971887  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 47/120
	I0417 18:58:03.974016  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 48/120
	I0417 18:58:04.975434  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 49/120
	I0417 18:58:05.977428  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 50/120
	I0417 18:58:06.979347  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 51/120
	I0417 18:58:07.980867  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 52/120
	I0417 18:58:08.982341  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 53/120
	I0417 18:58:09.983741  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 54/120
	I0417 18:58:10.985606  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 55/120
	I0417 18:58:11.986959  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 56/120
	I0417 18:58:12.988387  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 57/120
	I0417 18:58:13.989856  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 58/120
	I0417 18:58:14.991221  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 59/120
	I0417 18:58:15.993194  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 60/120
	I0417 18:58:16.994598  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 61/120
	I0417 18:58:17.996064  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 62/120
	I0417 18:58:18.997512  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 63/120
	I0417 18:58:19.999073  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 64/120
	I0417 18:58:21.000932  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 65/120
	I0417 18:58:22.002477  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 66/120
	I0417 18:58:23.004302  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 67/120
	I0417 18:58:24.005875  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 68/120
	I0417 18:58:25.007498  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 69/120
	I0417 18:58:26.009569  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 70/120
	I0417 18:58:27.010790  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 71/120
	I0417 18:58:28.012064  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 72/120
	I0417 18:58:29.013369  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 73/120
	I0417 18:58:30.015009  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 74/120
	I0417 18:58:31.016578  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 75/120
	I0417 18:58:32.018191  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 76/120
	I0417 18:58:33.019701  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 77/120
	I0417 18:58:34.021351  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 78/120
	I0417 18:58:35.022949  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 79/120
	I0417 18:58:36.025067  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 80/120
	I0417 18:58:37.027233  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 81/120
	I0417 18:58:38.028800  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 82/120
	I0417 18:58:39.030203  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 83/120
	I0417 18:58:40.031551  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 84/120
	I0417 18:58:41.033468  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 85/120
	I0417 18:58:42.035546  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 86/120
	I0417 18:58:43.037105  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 87/120
	I0417 18:58:44.039350  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 88/120
	I0417 18:58:45.040750  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 89/120
	I0417 18:58:46.042868  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 90/120
	I0417 18:58:47.044336  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 91/120
	I0417 18:58:48.045689  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 92/120
	I0417 18:58:49.047055  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 93/120
	I0417 18:58:50.048408  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 94/120
	I0417 18:58:51.050145  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 95/120
	I0417 18:58:52.052296  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 96/120
	I0417 18:58:53.053605  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 97/120
	I0417 18:58:54.055020  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 98/120
	I0417 18:58:55.056436  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 99/120
	I0417 18:58:56.058301  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 100/120
	I0417 18:58:57.059681  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 101/120
	I0417 18:58:58.061027  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 102/120
	I0417 18:58:59.062431  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 103/120
	I0417 18:59:00.063979  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 104/120
	I0417 18:59:01.065885  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 105/120
	I0417 18:59:02.067338  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 106/120
	I0417 18:59:03.068959  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 107/120
	I0417 18:59:04.070383  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 108/120
	I0417 18:59:05.071715  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 109/120
	I0417 18:59:06.073223  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 110/120
	I0417 18:59:07.074822  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 111/120
	I0417 18:59:08.076334  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 112/120
	I0417 18:59:09.077725  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 113/120
	I0417 18:59:10.079073  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 114/120
	I0417 18:59:11.080900  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 115/120
	I0417 18:59:12.082390  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 116/120
	I0417 18:59:13.083621  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 117/120
	I0417 18:59:14.084942  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 118/120
	I0417 18:59:15.086543  101275 main.go:141] libmachine: (ha-467706-m03) Waiting for machine to stop 119/120
	I0417 18:59:16.087572  101275 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0417 18:59:16.087675  101275 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0417 18:59:16.090108  101275 out.go:177] 
	W0417 18:59:16.091815  101275 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0417 18:59:16.091840  101275 out.go:239] * 
	W0417 18:59:16.095012  101275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0417 18:59:16.096463  101275 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-467706 -v=7 --alsologtostderr" : exit status 82
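For local triage of a GUEST_STOP_TIMEOUT like the one above, a minimal sketch of manual steps (an illustration only, not part of the test; it assumes a libvirt/KVM host with virsh available and reuses the profile and domain names taken from the log):

	# Retry the graceful stop with verbose output
	out/minikube-linux-amd64 stop -p ha-467706 --alsologtostderr
	# Inspect the libvirt domains backing the profile
	sudo virsh list --all
	# Last resort: force off the stuck node VM (hard power-off; unsynced guest data may be lost)
	sudo virsh destroy ha-467706-m03
	# Collect logs for the GitHub issue, as suggested in the failure box above
	out/minikube-linux-amd64 -p ha-467706 logs --file=logs.txt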
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-467706 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-467706 --wait=true -v=7 --alsologtostderr: (4m1.948552268s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-467706
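To confirm by hand that all four nodes of the HA profile came back after the restart, one could run something like the following (a sketch; it assumes the kubeconfig context created for this profile is named after it):

	# Host and Kubernetes component status for every node in the profile
	out/minikube-linux-amd64 -p ha-467706 status
	# Node readiness as seen by the API server
	kubectl --context ha-467706 get nodes -o wide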
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-467706 -n ha-467706
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 logs -n 25
E0417 19:03:19.319657   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-467706 logs -n 25: (1.960607s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m04 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp testdata/cp-test.txt                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m04_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03:/home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m03 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-467706 node stop m02 -v=7                                                     | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-467706 node start m02 -v=7                                                    | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-467706 -v=7                                                           | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-467706 -v=7                                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-467706 --wait=true -v=7                                                    | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:59 UTC | 17 Apr 24 19:03 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-467706                                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:03 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 18:59:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 18:59:16.161168  101637 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:59:16.161416  101637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:59:16.161424  101637 out.go:304] Setting ErrFile to fd 2...
	I0417 18:59:16.161429  101637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:59:16.161600  101637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:59:16.162161  101637 out.go:298] Setting JSON to false
	I0417 18:59:16.163042  101637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9704,"bootTime":1713370652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:59:16.163108  101637 start.go:139] virtualization: kvm guest
	I0417 18:59:16.165441  101637 out.go:177] * [ha-467706] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:59:16.167040  101637 notify.go:220] Checking for updates...
	I0417 18:59:16.167048  101637 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:59:16.168655  101637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:59:16.170321  101637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:59:16.171696  101637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:59:16.173643  101637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:59:16.174954  101637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:59:16.176897  101637 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:59:16.177114  101637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:59:16.177714  101637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:59:16.177770  101637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:59:16.193529  101637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0417 18:59:16.193931  101637 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:59:16.194519  101637 main.go:141] libmachine: Using API Version  1
	I0417 18:59:16.194543  101637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:59:16.194921  101637 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:59:16.195101  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.232622  101637 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 18:59:16.234260  101637 start.go:297] selected driver: kvm2
	I0417 18:59:16.234281  101637 start.go:901] validating driver "kvm2" against &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:59:16.234478  101637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:59:16.234851  101637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:59:16.234957  101637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 18:59:16.250785  101637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 18:59:16.251868  101637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:59:16.251980  101637 cni.go:84] Creating CNI manager for ""
	I0417 18:59:16.251992  101637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0417 18:59:16.252120  101637 start.go:340] cluster config:
	{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:59:16.252414  101637 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:59:16.254491  101637 out.go:177] * Starting "ha-467706" primary control-plane node in "ha-467706" cluster
	I0417 18:59:16.255875  101637 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:59:16.255932  101637 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 18:59:16.255949  101637 cache.go:56] Caching tarball of preloaded images
	I0417 18:59:16.256029  101637 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:59:16.256042  101637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:59:16.256175  101637 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:59:16.256404  101637 start.go:360] acquireMachinesLock for ha-467706: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:59:16.256465  101637 start.go:364] duration metric: took 37.118µs to acquireMachinesLock for "ha-467706"
	I0417 18:59:16.256487  101637 start.go:96] Skipping create...Using existing machine configuration
	I0417 18:59:16.256495  101637 fix.go:54] fixHost starting: 
	I0417 18:59:16.256806  101637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:59:16.256838  101637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:59:16.271803  101637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0417 18:59:16.272269  101637 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:59:16.272864  101637 main.go:141] libmachine: Using API Version  1
	I0417 18:59:16.272893  101637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:59:16.273208  101637 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:59:16.273421  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.273596  101637 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:59:16.275481  101637 fix.go:112] recreateIfNeeded on ha-467706: state=Running err=<nil>
	W0417 18:59:16.275501  101637 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 18:59:16.278558  101637 out.go:177] * Updating the running kvm2 "ha-467706" VM ...
	I0417 18:59:16.280125  101637 machine.go:94] provisionDockerMachine start ...
	I0417 18:59:16.280152  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.280444  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.283206  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.283733  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.283761  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.283954  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.284136  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.284291  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.284389  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.284586  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.284846  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.284860  101637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 18:59:16.391588  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:59:16.391618  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.391897  101637 buildroot.go:166] provisioning hostname "ha-467706"
	I0417 18:59:16.391929  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.392134  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.395071  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.395519  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.395545  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.395761  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.395959  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.396197  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.396351  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.396533  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.396719  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.396734  101637 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706 && echo "ha-467706" | sudo tee /etc/hostname
	I0417 18:59:16.521926  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:59:16.521959  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.524731  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.525161  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.525194  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.525387  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.525588  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.525765  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.525903  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.526064  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.526287  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.526312  101637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:59:16.630177  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:59:16.630223  101637 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:59:16.630249  101637 buildroot.go:174] setting up certificates
	I0417 18:59:16.630263  101637 provision.go:84] configureAuth start
	I0417 18:59:16.630278  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.630606  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:59:16.633428  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.633805  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.633826  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.634000  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.636292  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.636630  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.636659  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.636789  101637 provision.go:143] copyHostCerts
	I0417 18:59:16.636819  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:59:16.636855  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:59:16.636875  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:59:16.636940  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:59:16.637032  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:59:16.637050  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:59:16.637061  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:59:16.637085  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:59:16.637125  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:59:16.637141  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:59:16.637147  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:59:16.637166  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:59:16.637213  101637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706 san=[127.0.0.1 192.168.39.159 ha-467706 localhost minikube]
	I0417 18:59:16.942977  101637 provision.go:177] copyRemoteCerts
	I0417 18:59:16.943041  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:59:16.943068  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.945848  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.946244  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.946268  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.946443  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.946660  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.946824  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.946973  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:59:17.032860  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:59:17.032932  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:59:17.059369  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:59:17.059443  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0417 18:59:17.085939  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:59:17.086012  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:59:17.112322  101637 provision.go:87] duration metric: took 482.044876ms to configureAuth
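
The "generating server cert" step above (provision.go:117) amounts to issuing a CA-signed server certificate whose SANs cover the VM's IP addresses and host names, then copying it into /etc/docker on the guest. A minimal, self-contained Go sketch of that idea follows; this is not minikube's code — the CA here is generated on the spot instead of reusing ca.pem/ca-key.pem, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA, generated on the spot (the real run reuses an existing CA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN set shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-467706"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.159")},
		DNSNames:     []string{"ha-467706", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed server certificate in PEM form (the run writes it to server.pem).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
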
	I0417 18:59:17.112355  101637 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:59:17.112607  101637 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:59:17.112702  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:17.115466  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:17.115908  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:17.115945  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:17.116148  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:17.116355  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:17.116531  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:17.116654  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:17.116812  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:17.116988  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:17.117004  101637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:00:48.097917  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:00:48.097950  101637 machine.go:97] duration metric: took 1m31.817811387s to provisionDockerMachine
	I0417 19:00:48.097990  101637 start.go:293] postStartSetup for "ha-467706" (driver="kvm2")
	I0417 19:00:48.098010  101637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:00:48.098064  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.098382  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:00:48.098416  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.102058  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.102679  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.102718  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.102854  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.103073  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.103313  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.103520  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.185234  101637 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:00:48.189816  101637 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:00:48.189848  101637 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:00:48.189922  101637 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:00:48.190019  101637 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:00:48.190034  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 19:00:48.190146  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:00:48.201727  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:00:48.227988  101637 start.go:296] duration metric: took 129.978773ms for postStartSetup
	I0417 19:00:48.228038  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.228433  101637 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0417 19:00:48.228464  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.231137  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.231634  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.231663  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.231820  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.232076  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.232243  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.232408  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	W0417 19:00:48.312448  101637 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0417 19:00:48.312477  101637 fix.go:56] duration metric: took 1m32.055983625s for fixHost
	I0417 19:00:48.312499  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.315427  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.315841  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.315865  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.316091  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.316300  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.316497  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.316642  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.316815  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 19:00:48.317030  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 19:00:48.317044  101637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 19:00:48.413977  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713380448.378851145
	
	I0417 19:00:48.414001  101637 fix.go:216] guest clock: 1713380448.378851145
	I0417 19:00:48.414008  101637 fix.go:229] Guest: 2024-04-17 19:00:48.378851145 +0000 UTC Remote: 2024-04-17 19:00:48.312483497 +0000 UTC m=+92.204953272 (delta=66.367648ms)
	I0417 19:00:48.414028  101637 fix.go:200] guest clock delta is within tolerance: 66.367648ms
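
The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, compare it with the host-side timestamp, and accept the drift if it stays within tolerance. A small Go sketch of that comparison, reusing the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not minikube's configured value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output (as produced by
// `date +%s.%N` on the guest) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nine digits.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1713380448.378851145") // guest value from the log
	host := time.Unix(1713380448, 312483497)            // host-side timestamp from the same log line
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
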
	I0417 19:00:48.414034  101637 start.go:83] releasing machines lock for "ha-467706", held for 1m32.157556446s
	I0417 19:00:48.414051  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.414333  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 19:00:48.417074  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.417452  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.417484  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.417605  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418120  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418282  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418378  101637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:00:48.418444  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.418563  101637 ssh_runner.go:195] Run: cat /version.json
	I0417 19:00:48.418592  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.421174  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421538  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421720  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.421742  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421933  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.422020  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.422044  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.422108  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.422254  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.422291  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.422450  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.422449  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.422607  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.422768  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.518666  101637 ssh_runner.go:195] Run: systemctl --version
	I0417 19:00:48.525009  101637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:00:48.690941  101637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 19:00:48.697368  101637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:00:48.697454  101637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:00:48.708545  101637 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0417 19:00:48.708585  101637 start.go:494] detecting cgroup driver to use...
	I0417 19:00:48.708683  101637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:00:48.728162  101637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:00:48.743836  101637 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:00:48.743904  101637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:00:48.758958  101637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:00:48.773887  101637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:00:48.927506  101637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:00:49.080809  101637 docker.go:233] disabling docker service ...
	I0417 19:00:49.080910  101637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:00:49.098998  101637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:00:49.114271  101637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:00:49.268380  101637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:00:49.417855  101637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:00:49.432753  101637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:00:49.454659  101637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:00:49.454748  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.466397  101637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:00:49.466465  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.477794  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.489404  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.501053  101637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:00:49.514720  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.526494  101637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.540317  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.551931  101637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:00:49.562342  101637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:00:49.572515  101637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:00:49.729924  101637 ssh_runner.go:195] Run: sudo systemctl restart crio
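
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place — pinning the pause image and switching the cgroup manager — before CRI-O is restarted. A rough Go analogue of those two substitutions, run against an inline sample config rather than the real file on the guest (illustrative only; the sample contents are assumed, not copied from the node).

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of 02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
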
	I0417 19:00:50.043377  101637 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:00:50.043450  101637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:00:50.048656  101637 start.go:562] Will wait 60s for crictl version
	I0417 19:00:50.048712  101637 ssh_runner.go:195] Run: which crictl
	I0417 19:00:50.053039  101637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:00:50.103281  101637 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:00:50.103413  101637 ssh_runner.go:195] Run: crio --version
	I0417 19:00:50.134323  101637 ssh_runner.go:195] Run: crio --version
	I0417 19:00:50.168920  101637 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 19:00:50.170840  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 19:00:50.173677  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:50.174095  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:50.174125  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:50.174371  101637 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:00:50.179425  101637 kubeadm.go:877] updating cluster {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc
.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:00:50.179572  101637 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:00:50.179655  101637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:00:50.228025  101637 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:00:50.228053  101637 crio.go:433] Images already preloaded, skipping extraction
	I0417 19:00:50.228132  101637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:00:50.280514  101637 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:00:50.280543  101637 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:00:50.280552  101637 kubeadm.go:928] updating node { 192.168.39.159 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:00:50.280650  101637 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:00:50.280749  101637 ssh_runner.go:195] Run: crio config
	I0417 19:00:50.329894  101637 cni.go:84] Creating CNI manager for ""
	I0417 19:00:50.329924  101637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0417 19:00:50.329938  101637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:00:50.329979  101637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-467706 NodeName:ha-467706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:00:50.330119  101637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-467706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:00:50.330139  101637 kube-vip.go:111] generating kube-vip config ...
	I0417 19:00:50.330178  101637 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 19:00:50.343385  101637 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 19:00:50.343507  101637 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0417 19:00:50.343566  101637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:00:50.354586  101637 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:00:50.354664  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0417 19:00:50.365419  101637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0417 19:00:50.383471  101637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:00:50.400939  101637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0417 19:00:50.418982  101637 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 19:00:50.437425  101637 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 19:00:50.442580  101637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:00:50.593051  101637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:00:50.608721  101637 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.159
	I0417 19:00:50.608747  101637 certs.go:194] generating shared ca certs ...
	I0417 19:00:50.608763  101637 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.608946  101637 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:00:50.609008  101637 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:00:50.609019  101637 certs.go:256] generating profile certs ...
	I0417 19:00:50.609279  101637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 19:00:50.609317  101637 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643
	I0417 19:00:50.609339  101637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.250 192.168.39.254]
	I0417 19:00:50.867791  101637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 ...
	I0417 19:00:50.867824  101637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643: {Name:mk5b418360c8bcc8349e4f44b04836a80d7be0aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.868030  101637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643 ...
	I0417 19:00:50.868052  101637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643: {Name:mk3b23f1d93c687c18fca647217d3523e9ea468d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.868165  101637 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 19:00:50.868390  101637 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
	I0417 19:00:50.868603  101637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 19:00:50.868629  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 19:00:50.868650  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 19:00:50.868669  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 19:00:50.868689  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 19:00:50.868706  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 19:00:50.868722  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 19:00:50.868747  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 19:00:50.868765  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 19:00:50.868857  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:00:50.868894  101637 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:00:50.868907  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:00:50.868939  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:00:50.868965  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:00:50.869002  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:00:50.869056  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:00:50.869099  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:50.869120  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 19:00:50.869139  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 19:00:50.870012  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:00:50.932801  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:00:50.958694  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:00:50.985090  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:00:51.011010  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0417 19:00:51.037170  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:00:51.063677  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:00:51.090502  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:00:51.117585  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:00:51.144390  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:00:51.170142  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:00:51.195426  101637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:00:51.213820  101637 ssh_runner.go:195] Run: openssl version
	I0417 19:00:51.220452  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:00:51.232880  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.237923  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.237988  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.244487  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:00:51.255292  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:00:51.267739  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.272618  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.272674  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.278791  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:00:51.289343  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:00:51.301359  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.306088  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.306164  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.312212  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 19:00:51.322447  101637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:00:51.327322  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:00:51.333432  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:00:51.339412  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:00:51.345743  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:00:51.351477  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:00:51.357219  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
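
Each `openssl x509 ... -checkend 86400` call above asks whether the certificate will expire within the next 24 hours. The same check expressed in Go, as a hedged sketch: the path is simply one of those from the log, and the program would have to run on the guest where that file exists.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring what `openssl x509 -checkend 86400` verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
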
	I0417 19:00:51.363079  101637 kubeadm.go:391] StartCluster: {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2
ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:00:51.363221  101637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:00:51.363279  101637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:00:51.402856  101637 cri.go:89] found id: "136f02122ce7924cf7f7ef186573da1bc32d8e2704a97929d667ffdbc7943e4d"
	I0417 19:00:51.402879  101637 cri.go:89] found id: "7b12c8f9b974c559cc41e8f0b2e4b7085869c520ca4b2dee99eb44c5ea5aff3f"
	I0417 19:00:51.402884  101637 cri.go:89] found id: "c2109a45792da7037c574a8a67e646018a64869de55726de271a3c5ef3408599"
	I0417 19:00:51.402888  101637 cri.go:89] found id: "63d24dc45a18c29a0b2653723a7efaabcd1624b86e44e3e893975eddb4ed25c9"
	I0417 19:00:51.402892  101637 cri.go:89] found id: "92b4d2fc69ea2149747a0aeb92fc1bb791df4389dd47840bd21313c1cd295cb7"
	I0417 19:00:51.402896  101637 cri.go:89] found id: "143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0"
	I0417 19:00:51.402900  101637 cri.go:89] found id: "56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230"
	I0417 19:00:51.402904  101637 cri.go:89] found id: "2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d"
	I0417 19:00:51.402908  101637 cri.go:89] found id: "fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1"
	I0417 19:00:51.402918  101637 cri.go:89] found id: "c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b"
	I0417 19:00:51.402922  101637 cri.go:89] found id: "0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04"
	I0417 19:00:51.402926  101637 cri.go:89] found id: "d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f"
	I0417 19:00:51.402930  101637 cri.go:89] found id: "644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e"
	I0417 19:00:51.402934  101637 cri.go:89] found id: "7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c"
	I0417 19:00:51.402940  101637 cri.go:89] found id: ""
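
The container IDs listed above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation at 19:00:51. A minimal Go wrapper around that same command; it assumes crictl and passwordless sudo are available on the node, so it is only a sketch of the listing step, not minikube's cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers asks crictl for every container ID (running or not)
// whose pod namespace label is kube-system, one ID per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
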
	I0417 19:00:51.402987  101637 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.837646390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380598837621259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fccce2b-c98b-424c-a377-f7fde6db688a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.838250402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45b90162-7514-42c8-845e-00151d2db7dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.838309991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45b90162-7514-42c8-845e-00151d2db7dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.838877157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da2
4966583c20ca587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86
593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash:
d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35
c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXI
TED,CreatedAt:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45b90162-7514-42c8-845e-00151d2db7dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.917999907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bec42f37-253f-4fda-aab6-633fd2f71d58 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.918177690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bec42f37-253f-4fda-aab6-633fd2f71d58 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.920989963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ee2777f-9735-45a4-91ab-7f1fa09d257c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.921527385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380598921497230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ee2777f-9735-45a4-91ab-7f1fa09d257c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.922209299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59237ebe-4873-417e-8c30-9f41f605b83c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.922276253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59237ebe-4873-417e-8c30-9f41f605b83c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.922772053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da2
4966583c20ca587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86
593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash:
d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35
c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXI
TED,CreatedAt:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59237ebe-4873-417e-8c30-9f41f605b83c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.991938810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43c75ecf-4310-4799-9e6a-ba116b7b93f0 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.992051533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43c75ecf-4310-4799-9e6a-ba116b7b93f0 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.994700443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c95a3033-cdb7-4321-8937-b02093c05dbc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.995255484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380598995226754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c95a3033-cdb7-4321-8937-b02093c05dbc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.996003955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e39f800e-4158-4424-bb84-7b3a5bb05541 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.996064520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e39f800e-4158-4424-bb84-7b3a5bb05541 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:18 ha-467706 crio[3936]: time="2024-04-17 19:03:18.996613458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da2
4966583c20ca587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86
593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash:
d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35
c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXI
TED,CreatedAt:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e39f800e-4158-4424-bb84-7b3a5bb05541 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.043982730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92d7c9fe-aa95-4818-ace0-cdf00f231ffe name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.044065888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92d7c9fe-aa95-4818-ace0-cdf00f231ffe name=/runtime.v1.RuntimeService/Version
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.046022979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5aafa1b5-5e87-44af-b797-43166b21c73a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.046667936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380599046640310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5aafa1b5-5e87-44af-b797-43166b21c73a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.047426048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eaec29f-07f4-4e96-b096-78d2d87f8004 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.047510374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eaec29f-07f4-4e96-b096-78d2d87f8004 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:03:19 ha-467706 crio[3936]: time="2024-04-17 19:03:19.048032592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da2
4966583c20ca587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86
593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash:
d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35
c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXI
TED,CreatedAt:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eaec29f-07f4-4e96-b096-78d2d87f8004 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f516a8700689c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   ea88d6e24c8ce       kindnet-hspjv
	8d919d90d622c       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      About a minute ago   Running             kube-controller-manager   2                   0b17aae28bd5c       kube-controller-manager-ha-467706
	01db93220056c       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      About a minute ago   Running             kube-apiserver            3                   03c3df56890dc       kube-apiserver-ha-467706
	3f2a094ca0704       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   7aa7198c72741       busybox-fc5497c4f-r65s7
	4926fd2766532       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   646502d008278       kube-vip-ha-467706
	aaaccc1b9eff4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   ea88d6e24c8ce       kindnet-hspjv
	20b5bc576bf50       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0dfa2b67cf18d       coredns-7db6d8ff4d-56dz8
	a8435a798cb32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   18b1038ab4c44       coredns-7db6d8ff4d-kcdqn
	8629e94f9d56f       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      2 minutes ago        Running             kube-proxy                1                   d139a082bcd0e       kube-proxy-hd469
	45486612565dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   d13ddd1e6d9bd       storage-provisioner
	d6429c5525ff3       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      2 minutes ago        Running             kube-scheduler            1                   75507436451ad       kube-scheduler-ha-467706
	3a5559b8f0836       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      2 minutes ago        Exited              kube-controller-manager   1                   0b17aae28bd5c       kube-controller-manager-ha-467706
	d747907756c6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   1c643006fcb2c       etcd-ha-467706
	3057b83de2f13       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      2 minutes ago        Exited              kube-apiserver            2                   03c3df56890dc       kube-apiserver-ha-467706
	93e18e5085cb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   1b57101a3681c       busybox-fc5497c4f-r65s7
	143bf06c19825       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   0836a6cd9f827       coredns-7db6d8ff4d-kcdqn
	56dd0755cda79       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   2887673f339d8       coredns-7db6d8ff4d-56dz8
	fe8aab67cc372       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      13 minutes ago       Exited              kube-proxy                0                   269ac099b43b4       kube-proxy-hd469
	0b4b6b19cdcea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   167b41d6ec7a7       etcd-ha-467706
	7f539c70ed4df       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      13 minutes ago       Exited              kube-scheduler            0                   18f84a94ee364       kube-scheduler-ha-467706
	
	
	==> coredns [143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0] <==
	[INFO] 10.244.0.4:41971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086394s
	[INFO] 10.244.1.2:45052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204533s
	[INFO] 10.244.1.2:56976 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00191173s
	[INFO] 10.244.1.2:48269 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205618s
	[INFO] 10.244.1.2:41050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145556s
	[INFO] 10.244.1.2:40399 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367129s
	[INFO] 10.244.1.2:34908 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024876s
	[INFO] 10.244.1.2:33490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115098s
	[INFO] 10.244.1.2:43721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162828s
	[INFO] 10.244.2.2:52076 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338786s
	[INFO] 10.244.0.4:58146 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084273s
	[INFO] 10.244.0.4:46620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011163s
	[INFO] 10.244.1.2:55749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161622s
	[INFO] 10.244.1.2:50475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112723s
	[INFO] 10.244.2.2:58296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123831s
	[INFO] 10.244.2.2:42756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149112s
	[INFO] 10.244.2.2:44779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135979s
	[INFO] 10.244.0.4:32859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254227s
	[INFO] 10.244.0.4:39694 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091483s
	[INFO] 10.244.1.2:48582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162571s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [20b5bc576bf50109f1f95724ecc7da24966583c20ca587b933fec546d96bf0a5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1235845291]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:08.587) (total time: 10001ms):
	Trace[1235845291]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:01:18.588)
	Trace[1235845291]: [10.001416671s] [10.001416671s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1124796004]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:10.652) (total time: 11875ms):
	Trace[1124796004]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer 11875ms (19:01:22.528)
	Trace[1124796004]: [11.875864602s] [11.875864602s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230] <==
	[INFO] 10.244.2.2:44063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012356307s
	[INFO] 10.244.2.2:52058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226126s
	[INFO] 10.244.2.2:45346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192986s
	[INFO] 10.244.0.4:42980 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001884522s
	[INFO] 10.244.0.4:33643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177169s
	[INFO] 10.244.0.4:55640 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105826s
	[INFO] 10.244.0.4:54019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112453s
	[INFO] 10.244.2.2:41133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144651s
	[INFO] 10.244.2.2:59362 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099749s
	[INFO] 10.244.2.2:32859 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102166s
	[INFO] 10.244.0.4:33356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105006s
	[INFO] 10.244.0.4:56803 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133644s
	[INFO] 10.244.1.2:34244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104241s
	[INFO] 10.244.1.2:43628 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148576s
	[INFO] 10.244.2.2:50718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000190649s
	[INFO] 10.244.0.4:44677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013354s
	[INFO] 10.244.0.4:45227 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159231s
	[INFO] 10.244.1.2:46121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135561s
	[INFO] 10.244.1.2:43459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116038s
	[INFO] 10.244.1.2:34953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088316s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1143499581]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:05.981) (total time: 10000ms):
	Trace[1143499581]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:01:15.982)
	Trace[1143499581]: [10.000934854s] [10.000934854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49070->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49070->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49082->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49082->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-467706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:03:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:01:40 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:01:40 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:01:40 +0000   Wed, 17 Apr 2024 18:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:01:40 +0000   Wed, 17 Apr 2024 18:50:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    ha-467706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3208cc9eadd3453fab86398575c87f4f
	  System UUID:                3208cc9e-add3-453f-ab86-398575c87f4f
	  Boot ID:                    142d9103-8e77-48a0-a260-5d3c6e2e5842
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r65s7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-56dz8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-kcdqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-467706                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hspjv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-467706             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-467706    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hd469                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-467706             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-467706                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 98s    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-467706 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-467706 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-467706 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-467706 status is now: NodeReady
	  Normal   RegisteredNode           11m    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Warning  ContainerGCFailed        3m15s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           91s    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           85s    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           24s    node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	
	
	Name:               ha-467706-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:51:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:03:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:02:24 +0000   Wed, 17 Apr 2024 19:01:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:02:24 +0000   Wed, 17 Apr 2024 19:01:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:02:24 +0000   Wed, 17 Apr 2024 19:01:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:02:24 +0000   Wed, 17 Apr 2024 19:01:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.236
	  Hostname:    ha-467706-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f49a89e0b0d7432fa507fd1ad108778d
	  System UUID:                f49a89e0-b0d7-432f-a507-fd1ad108778d
	  Boot ID:                    c9888712-c2eb-47f0-864e-09a7afab5132
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xg855                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-467706-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-k6b9s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-467706-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-467706-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qxtf4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-467706-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-467706-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  NodeNotReady             8m40s                node-controller  Node ha-467706-m02 status is now: NodeNotReady
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           85s                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           24s                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	
	
	Name:               ha-467706-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_52_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:52:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:03:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:02:56 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:02:56 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:02:56 +0000   Wed, 17 Apr 2024 18:52:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:02:56 +0000   Wed, 17 Apr 2024 18:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-467706-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f9c357ef5b24ca6b2e9c8c989ff32f8
	  System UUID:                6f9c357e-f5b2-4ca6-b2e9-c8c989ff32f8
	  Boot ID:                    e3da4ee6-63a8-42c1-b01b-33c9437079b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzsn2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-467706-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5mvhn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-467706-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-467706-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jlcq7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-467706-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-467706-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-467706-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-467706-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node ha-467706-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node ha-467706-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node ha-467706-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 53s                kubelet          Node ha-467706-m03 has been rebooted, boot id: e3da4ee6-63a8-42c1-b01b-33c9437079b7
	  Normal   RegisteredNode           24s                node-controller  Node ha-467706-m03 event: Registered Node ha-467706-m03 in Controller
	
	
	Name:               ha-467706-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_53_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:53:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:03:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:03:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:03:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:03:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:03:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-467706-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd00fe12f3b54ed0af7c6ee4cc75cc20
	  System UUID:                dd00fe12-f3b5-4ed0-af7c-6ee4cc75cc20
	  Boot ID:                    af632482-8681-49b3-9b27-9fe4b73b9f20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-v8r5k       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-proxy-c7znr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 9m49s                  kube-proxy       
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m55s (x2 over 9m55s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m55s (x2 over 9m55s)  kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m55s (x2 over 9m55s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   NodeReady                9m45s                  kubelet          Node ha-467706-m04 status is now: NodeReady
	  Normal   RegisteredNode           91s                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   RegisteredNode           85s                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   NodeNotReady             51s                    node-controller  Node ha-467706-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           24s                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x3 over 9s)        kubelet          Node ha-467706-m04 has been rebooted, boot id: af632482-8681-49b3-9b27-9fe4b73b9f20
	  Normal   NodeHasSufficientMemory  9s (x4 over 9s)        kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x4 over 9s)        kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x4 over 9s)        kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                     kubelet          Node ha-467706-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s (x2 over 9s)        kubelet          Node ha-467706-m04 status is now: NodeReady
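
The four node descriptions above are kubectl's node detail output; the same Ready/MemoryPressure/DiskPressure transitions recorded in their Conditions and Events tables can also be read programmatically. The following is a minimal client-go sketch, not part of the test suite; it assumes a reachable cluster via the KUBECONFIG environment variable.

// nodeconditions.go - prints each node's conditions, the same data the
// "kubectl describe node" tables above summarize. Illustrative only.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path is taken from the environment; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%-16s %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}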
	
	
	==> dmesg <==
	[  +0.063801] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068254] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163928] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151596] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.299739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.466528] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062588] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.060993] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.990848] kauditd_printk_skb: 62 callbacks suppressed
	[Apr17 18:50] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.084981] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.627165] kauditd_printk_skb: 21 callbacks suppressed
	[Apr17 18:51] kauditd_printk_skb: 72 callbacks suppressed
	[Apr17 18:57] kauditd_printk_skb: 1 callbacks suppressed
	[Apr17 19:00] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.150432] systemd-fstab-generator[3867]: Ignoring "noauto" option for root device
	[  +0.187265] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.155375] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[  +0.300161] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[  +0.869428] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +6.236124] kauditd_printk_skb: 122 callbacks suppressed
	[Apr17 19:01] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.066335] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.090608] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.303219] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04] <==
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.639996Z","time spent":"7.629506381s","remote":"127.0.0.1:57240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269516Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.652689Z","time spent":"7.616824003s","remote":"127.0.0.1:57194","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.645049Z","time spent":"7.624473999s","remote":"127.0.0.1:54250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-17T18:59:17.31768Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f0ef8018a32f46af","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-17T18:59:17.317958Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318007Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318064Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318256Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318312Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318375Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318409Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318433Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318494Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.31857Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318617Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318664Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318692Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.321891Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-04-17T18:59:17.322057Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-04-17T18:59:17.322165Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-467706","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	
	
	==> etcd [d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584] <==
	{"level":"warn","ts":"2024-04-17T19:02:28.273881Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:28.274126Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:29.831216Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:29.831282Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:32.337318Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9a927b09686f3923","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"106.100038ms"}
	{"level":"warn","ts":"2024-04-17T19:02:32.337446Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"de2aef77ed8335d","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"106.233774ms"}
	{"level":"info","ts":"2024-04-17T19:02:32.337735Z","caller":"traceutil/trace.go:171","msg":"trace[533880378] transaction","detail":"{read_only:false; response_revision:2271; number_of_response:1; }","duration":"279.173684ms","start":"2024-04-17T19:02:32.05848Z","end":"2024-04-17T19:02:32.337653Z","steps":["trace[533880378] 'process raft request'  (duration: 279.01634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:02:33.274958Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:33.275269Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:33.833867Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:33.833936Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:37.83688Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:37.836999Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"de2aef77ed8335d","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:38.275361Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-17T19:02:38.275441Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de2aef77ed8335d","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-17T19:02:38.699638Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:02:38.702235Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"de2aef77ed8335d","error":"failed to dial de2aef77ed8335d on stream MsgApp v2 (peer de2aef77ed8335d failed to find local node f0ef8018a32f46af)"}
	{"level":"info","ts":"2024-04-17T19:02:38.718728Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f0ef8018a32f46af","to":"de2aef77ed8335d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-17T19:02:38.718787Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:38.718802Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:38.732636Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f0ef8018a32f46af","to":"de2aef77ed8335d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-17T19:02:38.73278Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:38.792887Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:38.802354Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:43.610359Z","caller":"traceutil/trace.go:171","msg":"trace[188765169] transaction","detail":"{read_only:false; response_revision:2313; number_of_response:1; }","duration":"114.057089ms","start":"2024-04-17T19:02:43.49628Z","end":"2024-04-17T19:02:43.610337Z","steps":["trace[188765169] 'process raft request'  (duration: 113.890172ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:03:19 up 13 min,  0 users,  load average: 1.20, 0.89, 0.43
	Linux ha-467706 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6] <==
	I0417 19:00:58.117002       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0417 19:01:01.025231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0417 19:01:04.096947       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0417 19:01:15.104581       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0417 19:01:22.527869       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.106:56742->10.96.0.1:443: read: connection reset by peer
	I0417 19:01:25.528614       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
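
The panic above is the tail end of a bounded retry loop: the pod keeps re-listing /api/v1/nodes, logs each failure, and gives up once its retry budget is exhausted, at which point the container exits and is restarted. The sketch below mirrors that shape only; it is not kindnetd's actual source.

// nodelist_retry.go - illustrative bounded-retry pattern matching the
// "Failed to get nodes, retrying" / "Reached maximum retries" log lines.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// listNodesWithRetry re-lists nodes until success or the retry budget runs out.
func listNodesWithRetry(cs kubernetes.Interface, attempts int, backoff time.Duration) (*corev1.NodeList, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err == nil {
			return nodes, nil
		}
		lastErr = err
		log.Printf("Failed to get nodes, retrying after error: %v", err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("reached maximum retries obtaining node list: %w", lastErr)
}

func main() {
	cfg, err := rest.InClusterConfig() // the DaemonSet pod uses its in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := listNodesWithRetry(cs, 5, 3*time.Second)
	if err != nil {
		panic(err) // exhausting the budget ends in a panic, as in the log above
	}
	fmt.Printf("got %d nodes\n", len(nodes.Items))
}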
	
	
	==> kindnet [f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28] <==
	I0417 19:02:45.469476       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:02:55.508539       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:02:55.508588       1 main.go:227] handling current node
	I0417 19:02:55.508600       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:02:55.508605       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:02:55.508714       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 19:02:55.508719       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 19:02:55.508756       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:02:55.508761       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:03:05.524366       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:03:05.524428       1 main.go:227] handling current node
	I0417 19:03:05.524469       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:03:05.524476       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:03:05.524744       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 19:03:05.524785       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 19:03:05.524909       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:03:05.524936       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:03:15.542425       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:03:15.542481       1 main.go:227] handling current node
	I0417 19:03:15.542502       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:03:15.542508       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:03:15.542693       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0417 19:03:15.542704       1 main.go:250] Node ha-467706-m03 has CIDR [10.244.2.0/24] 
	I0417 19:03:15.542748       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:03:15.542779       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7] <==
	I0417 19:01:40.302700       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:01:40.302813       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:01:40.362205       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:01:40.370399       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:01:40.370506       1 policy_source.go:224] refreshing policies
	I0417 19:01:40.379876       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 19:01:40.380481       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:01:40.380632       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0417 19:01:40.380661       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:01:40.381351       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:01:40.386704       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:01:40.391317       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:01:40.395290       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:01:40.395335       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:01:40.395352       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:01:40.395357       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:01:40.395362       1 cache.go:39] Caches are synced for autoregister controller
	W0417 19:01:40.398554       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.236 192.168.39.250]
	I0417 19:01:40.399780       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:01:40.402764       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 19:01:40.406966       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0417 19:01:40.416425       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0417 19:01:41.327366       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0417 19:01:42.660827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.236 192.168.39.250]
	W0417 19:01:52.644647       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.236]
	
	
	==> kube-apiserver [3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505] <==
	I0417 19:00:58.074778       1 options.go:221] external host was not specified, using 192.168.39.159
	I0417 19:00:58.081032       1 server.go:148] Version: v1.30.0-rc.2
	I0417 19:00:58.081194       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:00:58.842384       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0417 19:00:58.855810       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0417 19:00:58.856895       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0417 19:00:58.859962       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:00:58.860191       1 instance.go:299] Using reconciler: lease
	W0417 19:01:18.839034       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0417 19:01:18.839271       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0417 19:01:18.861671       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0] <==
	I0417 19:00:58.776058       1 serving.go:380] Generated self-signed cert in-memory
	I0417 19:00:59.297601       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.2"
	I0417 19:00:59.297687       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:00:59.300560       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0417 19:00:59.301891       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:00:59.302000       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:00:59.302350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0417 19:01:19.868039       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.159:8443/healthz\": dial tcp 192.168.39.159:8443: connect: connection refused"
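
The error above is the controller manager giving up on its pre-flight check of the API server's /healthz endpoint at 192.168.39.159:8443. A minimal stand-alone probe of the same endpoint is sketched below; it is illustrative only, and InsecureSkipVerify is used purely to keep the sketch short where a real check would trust the cluster CA.

// healthz_probe.go - repeatedly GETs the API server /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://192.168.39.159:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d: %v\n", i+1, err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("attempt %d: %d %s\n", i+1, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK { // healthy API servers answer "ok"
			return
		}
		time.Sleep(2 * time.Second)
	}
}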
	
	
	==> kube-controller-manager [8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1] <==
	I0417 19:01:54.462209       1 shared_informer.go:320] Caches are synced for PVC protection
	I0417 19:01:54.464768       1 shared_informer.go:320] Caches are synced for stateful set
	I0417 19:01:54.474760       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0417 19:01:54.518828       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:01:54.551713       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:01:54.580195       1 shared_informer.go:320] Caches are synced for namespace
	I0417 19:01:54.589665       1 shared_informer.go:320] Caches are synced for attach detach
	I0417 19:01:54.623898       1 shared_informer.go:320] Caches are synced for persistent volume
	I0417 19:01:54.626528       1 shared_informer.go:320] Caches are synced for service account
	I0417 19:01:54.686502       1 shared_informer.go:320] Caches are synced for PV protection
	I0417 19:01:55.062605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:01:55.062653       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0417 19:01:55.088363       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:02:00.063125       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-ncm9g EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-ncm9g\": the object has been modified; please apply your changes to the latest version and try again"
	I0417 19:02:00.063738       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6747d279-1f54-4ee7-bfdc-60445a3f49bb", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-ncm9g EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-ncm9g": the object has been modified; please apply your changes to the latest version and try again
	I0417 19:02:00.086434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.019029ms"
	I0417 19:02:00.086596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.389µs"
	I0417 19:02:02.265828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.179µs"
	I0417 19:02:06.814012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.759481ms"
	I0417 19:02:06.814222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.149µs"
	I0417 19:02:27.023496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.57171ms"
	I0417 19:02:27.023739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.763µs"
	I0417 19:02:49.352191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.937536ms"
	I0417 19:02:49.352962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.012µs"
	I0417 19:03:10.732190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-467706-m04"
	
	
	==> kube-proxy [8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94] <==
	I0417 19:00:59.327365       1 server_linux.go:69] "Using iptables proxy"
	E0417 19:01:00.256828       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:03.328978       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:06.400738       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:12.543823       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:24.831694       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0417 19:01:41.256405       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0417 19:01:41.422268       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:01:41.422373       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:01:41.422410       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:01:41.431466       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:01:41.431783       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:01:41.431830       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:01:41.453239       1 config.go:192] "Starting service config controller"
	I0417 19:01:41.453295       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:01:41.453331       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:01:41.453336       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:01:41.472778       1 config.go:319] "Starting node config controller"
	I0417 19:01:41.472838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:01:41.555184       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:01:41.555259       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:01:41.580546       1 shared_informer.go:320] Caches are synced for node config
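
The early lines of this block show kube-proxy repeatedly failing to reach control-plane.minikube.internal (the kube-vip address 192.168.39.254) on port 8443 with "no route to host" until the endpoint came back at 19:01:41. A small stand-alone reachability check for that endpoint, illustrative only, could look like this.

// vipcheck.go - resolves the control-plane hostname and dials port 8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "control-plane.minikube.internal" // resolved via /etc/hosts inside the VM
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Printf("lookup %s: %v\n", host, err)
		return
	}
	fmt.Printf("%s -> %v\n", host, addrs)
	for _, a := range addrs {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(a, "8443"), 3*time.Second)
		if err != nil {
			fmt.Printf("dial %s:8443: %v\n", a, err)
			continue
		}
		conn.Close()
		fmt.Printf("dial %s:8443: ok\n", a)
	}
}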
	
	
	==> kube-proxy [fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1] <==
	E0417 18:58:11.297686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:14.369905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:14.369985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:14.370189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:14.370413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:17.441076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:17.441345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:20.511694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:20.511840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:20.511923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:20.511967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:23.585005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:23.585151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:29.727565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:29.728063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:35.872844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:35.873036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:35.873557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:35.873617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:51.231559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:51.231633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:54.303628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:54.304252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:59:00.453520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:59:00.461295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c] <==
	E0417 18:59:08.953701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:08.982606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 18:59:08.983240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 18:59:09.004914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:59:09.004981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 18:59:09.297707       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 18:59:09.297774       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 18:59:09.594672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 18:59:09.594723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 18:59:09.832804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 18:59:09.833015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 18:59:10.722056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:10.722176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:10.841546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:10.841770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:11.280417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 18:59:11.280518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 18:59:15.518672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:15.518789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:17.228856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:59:17.228892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0417 18:59:17.229649       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0417 18:59:17.229795       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0417 18:59:17.248416       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0417 18:59:17.248620       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf] <==
	W0417 19:01:34.256329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.159:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:34.256465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.159:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:35.625853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:35.625913       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.159622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.159791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.328669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.328754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.469535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.469668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.823051       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.823211       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.051573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.159:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.051659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.159:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.322970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.323042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.336579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.336652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.452937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.453019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:40.319458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:01:40.320071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:01:40.319805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:01:40.320381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0417 19:01:58.978678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:02:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:02:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:02:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:02:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 19:02:05 ha-467706 kubelet[1377]: I0417 19:02:05.708582    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-r65s7" podStartSLOduration=554.818187277 podStartE2EDuration="9m16.708546349s" podCreationTimestamp="2024-04-17 18:52:49 +0000 UTC" firstStartedPulling="2024-04-17 18:52:50.559816959 +0000 UTC m=+166.451563350" lastFinishedPulling="2024-04-17 18:52:52.45017603 +0000 UTC m=+168.341922422" observedRunningTime="2024-04-17 18:52:53.092232211 +0000 UTC m=+168.983978622" watchObservedRunningTime="2024-04-17 19:02:05.708546349 +0000 UTC m=+721.600292759"
	Apr 17 19:02:09 ha-467706 kubelet[1377]: I0417 19:02:09.281711    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:02:09 ha-467706 kubelet[1377]: E0417 19:02:09.282417    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	Apr 17 19:02:21 ha-467706 kubelet[1377]: I0417 19:02:21.281077    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:02:21 ha-467706 kubelet[1377]: E0417 19:02:21.281740    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	Apr 17 19:02:32 ha-467706 kubelet[1377]: I0417 19:02:32.281609    1377 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-467706" podUID="b92af6a0-34f7-4bdb-b0c3-e2821f4e693c"
	Apr 17 19:02:32 ha-467706 kubelet[1377]: I0417 19:02:32.365475    1377 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-467706"
	Apr 17 19:02:34 ha-467706 kubelet[1377]: I0417 19:02:34.282182    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:02:34 ha-467706 kubelet[1377]: E0417 19:02:34.283856    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	Apr 17 19:02:34 ha-467706 kubelet[1377]: I0417 19:02:34.304028    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-467706" podStartSLOduration=2.303992031 podStartE2EDuration="2.303992031s" podCreationTimestamp="2024-04-17 19:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-17 19:02:34.302330354 +0000 UTC m=+750.194076769" watchObservedRunningTime="2024-04-17 19:02:34.303992031 +0000 UTC m=+750.195738442"
	Apr 17 19:02:48 ha-467706 kubelet[1377]: I0417 19:02:48.281232    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:02:48 ha-467706 kubelet[1377]: E0417 19:02:48.282844    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	Apr 17 19:03:01 ha-467706 kubelet[1377]: I0417 19:03:01.281571    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:03:01 ha-467706 kubelet[1377]: E0417 19:03:01.282278    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	Apr 17 19:03:04 ha-467706 kubelet[1377]: E0417 19:03:04.309995    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:03:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:03:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:03:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:03:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 19:03:14 ha-467706 kubelet[1377]: I0417 19:03:14.281975    1377 scope.go:117] "RemoveContainer" containerID="45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911"
	Apr 17 19:03:14 ha-467706 kubelet[1377]: E0417 19:03:14.285708    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b5a737ba-33c0-4c0d-ab14-fe98f2c6e903)\"" pod="kube-system/storage-provisioner" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0417 19:03:18.508175  102692 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18665-75973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
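A side note on the "bufio.Scanner: token too long" error in the stderr block above: that message is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt line by line. As a hedged illustration only (not minikube's actual logs.go code), a reader can raise that limit with Scanner.Buffer before scanning:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path is illustrative; any file containing a very long line triggers the same error.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB);
		// allow lines of up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call, this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, err)
		}
	}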
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-467706 -n ha-467706
helpers_test.go:261: (dbg) Run:  kubectl --context ha-467706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.83s)
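For orientation, the slash-separated test names in this report come from Go's standard subtest mechanism: sequential t.Run calls produce hierarchical names such as TestMultiControlPlane/serial/RestartClusterKeepsNodes and TestMultiControlPlane/serial/StopCluster. A minimal sketch of that structure (names and bodies are illustrative, not the real ha_test.go):

	package ha_sketch

	import "testing"

	// Illustrative only: nested t.Run calls yield the hierarchical test names
	// that appear in the failure summaries above and below.
	func TestMultiControlPlane(t *testing.T) {
		t.Run("serial", func(t *testing.T) {
			t.Run("RestartClusterKeepsNodes", func(t *testing.T) {
				// restart the cluster and assert all nodes come back
			})
			t.Run("StopCluster", func(t *testing.T) {
				// stop the cluster and assert the command exits zero
			})
		})
	}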

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 stop -v=7 --alsologtostderr: exit status 82 (2m0.513406936s)

                                                
                                                
-- stdout --
	* Stopping node "ha-467706-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:03:38.766248  103090 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:03:38.766761  103090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:03:38.766838  103090 out.go:304] Setting ErrFile to fd 2...
	I0417 19:03:38.766856  103090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:03:38.767345  103090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:03:38.767857  103090 out.go:298] Setting JSON to false
	I0417 19:03:38.768011  103090 mustload.go:65] Loading cluster: ha-467706
	I0417 19:03:38.768475  103090 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:03:38.768587  103090 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 19:03:38.768802  103090 mustload.go:65] Loading cluster: ha-467706
	I0417 19:03:38.768942  103090 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:03:38.768969  103090 stop.go:39] StopHost: ha-467706-m04
	I0417 19:03:38.769349  103090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:03:38.769414  103090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:03:38.785289  103090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0417 19:03:38.785730  103090 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:03:38.786371  103090 main.go:141] libmachine: Using API Version  1
	I0417 19:03:38.786401  103090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:03:38.786779  103090 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:03:38.789465  103090 out.go:177] * Stopping node "ha-467706-m04"  ...
	I0417 19:03:38.791048  103090 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0417 19:03:38.791093  103090 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 19:03:38.791329  103090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0417 19:03:38.791349  103090 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 19:03:38.794121  103090 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:03:38.794569  103090 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 20:03:05 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 19:03:38.794594  103090 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:03:38.794805  103090 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 19:03:38.795023  103090 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 19:03:38.795232  103090 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 19:03:38.795408  103090 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	I0417 19:03:38.885345  103090 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0417 19:03:38.940024  103090 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0417 19:03:38.996308  103090 main.go:141] libmachine: Stopping "ha-467706-m04"...
	I0417 19:03:38.996344  103090 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 19:03:38.998138  103090 main.go:141] libmachine: (ha-467706-m04) Calling .Stop
	I0417 19:03:39.001830  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 0/120
	I0417 19:03:40.004379  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 1/120
	I0417 19:03:41.005825  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 2/120
	I0417 19:03:42.007211  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 3/120
	I0417 19:03:43.009405  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 4/120
	I0417 19:03:44.011654  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 5/120
	I0417 19:03:45.013040  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 6/120
	I0417 19:03:46.015297  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 7/120
	I0417 19:03:47.016627  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 8/120
	I0417 19:03:48.018468  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 9/120
	I0417 19:03:49.019939  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 10/120
	I0417 19:03:50.021290  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 11/120
	I0417 19:03:51.022827  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 12/120
	I0417 19:03:52.024746  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 13/120
	I0417 19:03:53.026188  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 14/120
	I0417 19:03:54.028421  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 15/120
	I0417 19:03:55.030680  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 16/120
	I0417 19:03:56.032119  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 17/120
	I0417 19:03:57.033930  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 18/120
	I0417 19:03:58.035474  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 19/120
	I0417 19:03:59.037841  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 20/120
	I0417 19:04:00.039270  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 21/120
	I0417 19:04:01.040839  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 22/120
	I0417 19:04:02.042286  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 23/120
	I0417 19:04:03.043624  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 24/120
	I0417 19:04:04.045961  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 25/120
	I0417 19:04:05.048046  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 26/120
	I0417 19:04:06.049939  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 27/120
	I0417 19:04:07.051435  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 28/120
	I0417 19:04:08.052974  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 29/120
	I0417 19:04:09.055292  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 30/120
	I0417 19:04:10.057443  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 31/120
	I0417 19:04:11.058766  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 32/120
	I0417 19:04:12.060251  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 33/120
	I0417 19:04:13.061587  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 34/120
	I0417 19:04:14.063210  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 35/120
	I0417 19:04:15.065262  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 36/120
	I0417 19:04:16.067261  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 37/120
	I0417 19:04:17.068701  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 38/120
	I0417 19:04:18.070241  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 39/120
	I0417 19:04:19.072594  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 40/120
	I0417 19:04:20.074124  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 41/120
	I0417 19:04:21.075591  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 42/120
	I0417 19:04:22.076782  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 43/120
	I0417 19:04:23.078139  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 44/120
	I0417 19:04:24.080185  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 45/120
	I0417 19:04:25.081537  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 46/120
	I0417 19:04:26.083155  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 47/120
	I0417 19:04:27.084504  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 48/120
	I0417 19:04:28.085859  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 49/120
	I0417 19:04:29.087543  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 50/120
	I0417 19:04:30.089343  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 51/120
	I0417 19:04:31.091336  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 52/120
	I0417 19:04:32.092820  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 53/120
	I0417 19:04:33.095116  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 54/120
	I0417 19:04:34.097138  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 55/120
	I0417 19:04:35.098901  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 56/120
	I0417 19:04:36.100325  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 57/120
	I0417 19:04:37.101953  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 58/120
	I0417 19:04:38.103381  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 59/120
	I0417 19:04:39.105279  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 60/120
	I0417 19:04:40.107525  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 61/120
	I0417 19:04:41.109092  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 62/120
	I0417 19:04:42.111544  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 63/120
	I0417 19:04:43.113626  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 64/120
	I0417 19:04:44.115853  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 65/120
	I0417 19:04:45.117155  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 66/120
	I0417 19:04:46.119506  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 67/120
	I0417 19:04:47.120945  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 68/120
	I0417 19:04:48.122362  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 69/120
	I0417 19:04:49.123787  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 70/120
	I0417 19:04:50.125327  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 71/120
	I0417 19:04:51.127465  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 72/120
	I0417 19:04:52.129314  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 73/120
	I0417 19:04:53.131516  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 74/120
	I0417 19:04:54.133510  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 75/120
	I0417 19:04:55.135302  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 76/120
	I0417 19:04:56.137729  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 77/120
	I0417 19:04:57.139560  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 78/120
	I0417 19:04:58.141846  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 79/120
	I0417 19:04:59.144238  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 80/120
	I0417 19:05:00.145886  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 81/120
	I0417 19:05:01.148329  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 82/120
	I0417 19:05:02.149968  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 83/120
	I0417 19:05:03.151486  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 84/120
	I0417 19:05:04.153720  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 85/120
	I0417 19:05:05.155157  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 86/120
	I0417 19:05:06.156673  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 87/120
	I0417 19:05:07.158282  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 88/120
	I0417 19:05:08.159791  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 89/120
	I0417 19:05:09.162163  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 90/120
	I0417 19:05:10.163580  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 91/120
	I0417 19:05:11.164980  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 92/120
	I0417 19:05:12.166616  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 93/120
	I0417 19:05:13.168065  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 94/120
	I0417 19:05:14.169724  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 95/120
	I0417 19:05:15.171392  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 96/120
	I0417 19:05:16.173067  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 97/120
	I0417 19:05:17.174649  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 98/120
	I0417 19:05:18.176124  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 99/120
	I0417 19:05:19.178436  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 100/120
	I0417 19:05:20.179881  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 101/120
	I0417 19:05:21.181312  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 102/120
	I0417 19:05:22.182809  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 103/120
	I0417 19:05:23.184353  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 104/120
	I0417 19:05:24.185929  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 105/120
	I0417 19:05:25.187348  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 106/120
	I0417 19:05:26.188945  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 107/120
	I0417 19:05:27.190384  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 108/120
	I0417 19:05:28.191814  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 109/120
	I0417 19:05:29.194239  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 110/120
	I0417 19:05:30.196165  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 111/120
	I0417 19:05:31.197657  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 112/120
	I0417 19:05:32.199295  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 113/120
	I0417 19:05:33.201635  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 114/120
	I0417 19:05:34.203805  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 115/120
	I0417 19:05:35.205914  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 116/120
	I0417 19:05:36.207342  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 117/120
	I0417 19:05:37.208894  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 118/120
	I0417 19:05:38.210349  103090 main.go:141] libmachine: (ha-467706-m04) Waiting for machine to stop 119/120
	I0417 19:05:39.211308  103090 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0417 19:05:39.211381  103090 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0417 19:05:39.213764  103090 out.go:177] 
	W0417 19:05:39.215528  103090 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0417 19:05:39.215554  103090 out.go:239] * 
	* 
	W0417 19:05:39.218749  103090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0417 19:05:39.220419  103090 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-467706 stop -v=7 --alsologtostderr": exit status 82
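ha_test.go drives the minikube binary as an external process and fails the subtest on any non-zero exit status (82 here, produced by the GUEST_STOP_TIMEOUT above after 120 one-second "Waiting for machine to stop" polls). A rough sketch of that pattern, assuming the same binary path and profile name shown in the report rather than the harness's actual helper code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command line as the failing step in the report.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-467706", "stop", "-v=7", "--alsologtostderr")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		err := cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// The harness treats this as a test failure; the report shows exit status 82.
			fmt.Printf("stop failed: exit status %d\n", exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("stop succeeded")
	}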
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr: exit status 3 (19.130173884s)

                                                
                                                
-- stdout --
	ha-467706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-467706-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:05:39.284332  103400 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:05:39.284612  103400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:05:39.284622  103400 out.go:304] Setting ErrFile to fd 2...
	I0417 19:05:39.284626  103400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:05:39.284859  103400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:05:39.285169  103400 out.go:298] Setting JSON to false
	I0417 19:05:39.285202  103400 mustload.go:65] Loading cluster: ha-467706
	I0417 19:05:39.285244  103400 notify.go:220] Checking for updates...
	I0417 19:05:39.285787  103400 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:05:39.285881  103400 status.go:255] checking status of ha-467706 ...
	I0417 19:05:39.286441  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.286500  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.307041  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0417 19:05:39.307532  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.308160  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.308211  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.308892  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.309174  103400 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 19:05:39.311019  103400 status.go:330] ha-467706 host status = "Running" (err=<nil>)
	I0417 19:05:39.311041  103400 host.go:66] Checking if "ha-467706" exists ...
	I0417 19:05:39.311510  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.311568  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.327949  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0417 19:05:39.328355  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.328868  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.328890  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.329226  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.329442  103400 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 19:05:39.332458  103400 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:05:39.333016  103400 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:05:39.333044  103400 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:05:39.333210  103400 host.go:66] Checking if "ha-467706" exists ...
	I0417 19:05:39.333489  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.333541  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.348597  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0417 19:05:39.349034  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.349510  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.349531  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.349873  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.350071  103400 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:05:39.350261  103400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:05:39.350297  103400 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:05:39.352901  103400 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:05:39.353390  103400 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:05:39.353415  103400 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:05:39.353544  103400 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:05:39.353745  103400 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:05:39.353934  103400 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:05:39.354130  103400 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:05:39.438311  103400 ssh_runner.go:195] Run: systemctl --version
	I0417 19:05:39.445923  103400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:05:39.465450  103400 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 19:05:39.465501  103400 api_server.go:166] Checking apiserver status ...
	I0417 19:05:39.465555  103400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:05:39.486034  103400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5171/cgroup
	W0417 19:05:39.498814  103400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5171/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 19:05:39.498887  103400 ssh_runner.go:195] Run: ls
	I0417 19:05:39.504379  103400 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 19:05:39.509231  103400 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 19:05:39.509256  103400 status.go:422] ha-467706 apiserver status = Running (err=<nil>)
	I0417 19:05:39.509266  103400 status.go:257] ha-467706 status: &{Name:ha-467706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:05:39.509297  103400 status.go:255] checking status of ha-467706-m02 ...
	I0417 19:05:39.509580  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.509620  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.524516  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0417 19:05:39.524981  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.525615  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.525662  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.526002  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.526256  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetState
	I0417 19:05:39.527920  103400 status.go:330] ha-467706-m02 host status = "Running" (err=<nil>)
	I0417 19:05:39.527942  103400 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 19:05:39.528366  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.528417  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.544410  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0417 19:05:39.544859  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.545307  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.545328  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.545771  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.545956  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetIP
	I0417 19:05:39.549042  103400 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 19:05:39.549605  103400 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 20:01:03 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 19:05:39.549631  103400 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 19:05:39.549836  103400 host.go:66] Checking if "ha-467706-m02" exists ...
	I0417 19:05:39.550163  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.550216  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.566364  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0417 19:05:39.566835  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.567360  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.567398  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.567753  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.567990  103400 main.go:141] libmachine: (ha-467706-m02) Calling .DriverName
	I0417 19:05:39.568219  103400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:05:39.568250  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHHostname
	I0417 19:05:39.571167  103400 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 19:05:39.571584  103400 main.go:141] libmachine: (ha-467706-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:50:50", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 20:01:03 +0000 UTC Type:0 Mac:52:54:00:d8:50:50 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-467706-m02 Clientid:01:52:54:00:d8:50:50}
	I0417 19:05:39.571626  103400 main.go:141] libmachine: (ha-467706-m02) DBG | domain ha-467706-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:d8:50:50 in network mk-ha-467706
	I0417 19:05:39.571773  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHPort
	I0417 19:05:39.571950  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHKeyPath
	I0417 19:05:39.572091  103400 main.go:141] libmachine: (ha-467706-m02) Calling .GetSSHUsername
	I0417 19:05:39.572240  103400 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m02/id_rsa Username:docker}
	I0417 19:05:39.659291  103400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:05:39.682345  103400 kubeconfig.go:125] found "ha-467706" server: "https://192.168.39.254:8443"
	I0417 19:05:39.682380  103400 api_server.go:166] Checking apiserver status ...
	I0417 19:05:39.682422  103400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:05:39.701899  103400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1600/cgroup
	W0417 19:05:39.714899  103400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1600/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 19:05:39.714966  103400 ssh_runner.go:195] Run: ls
	I0417 19:05:39.720530  103400 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 19:05:39.725170  103400 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 19:05:39.725211  103400 status.go:422] ha-467706-m02 apiserver status = Running (err=<nil>)
	I0417 19:05:39.725224  103400 status.go:257] ha-467706-m02 status: &{Name:ha-467706-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:05:39.725246  103400 status.go:255] checking status of ha-467706-m04 ...
	I0417 19:05:39.725701  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.725747  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.741360  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33209
	I0417 19:05:39.741850  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.742359  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.742384  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.742714  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.742961  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetState
	I0417 19:05:39.744418  103400 status.go:330] ha-467706-m04 host status = "Running" (err=<nil>)
	I0417 19:05:39.744435  103400 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 19:05:39.744725  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.744787  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.760874  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0417 19:05:39.761338  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.761876  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.761898  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.762256  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.762474  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetIP
	I0417 19:05:39.765545  103400 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:05:39.765976  103400 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 20:03:05 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 19:05:39.765998  103400 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:05:39.766146  103400 host.go:66] Checking if "ha-467706-m04" exists ...
	I0417 19:05:39.766445  103400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:05:39.766483  103400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:05:39.784217  103400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46325
	I0417 19:05:39.784764  103400 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:05:39.785318  103400 main.go:141] libmachine: Using API Version  1
	I0417 19:05:39.785349  103400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:05:39.785763  103400 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:05:39.785995  103400 main.go:141] libmachine: (ha-467706-m04) Calling .DriverName
	I0417 19:05:39.786230  103400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:05:39.786262  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHHostname
	I0417 19:05:39.789160  103400 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:05:39.789604  103400 main.go:141] libmachine: (ha-467706-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:5b:7b", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 20:03:05 +0000 UTC Type:0 Mac:52:54:00:33:5b:7b Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-467706-m04 Clientid:01:52:54:00:33:5b:7b}
	I0417 19:05:39.789642  103400 main.go:141] libmachine: (ha-467706-m04) DBG | domain ha-467706-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:33:5b:7b in network mk-ha-467706
	I0417 19:05:39.789916  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHPort
	I0417 19:05:39.790106  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHKeyPath
	I0417 19:05:39.790270  103400 main.go:141] libmachine: (ha-467706-m04) Calling .GetSSHUsername
	I0417 19:05:39.790519  103400 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706-m04/id_rsa Username:docker}
	W0417 19:05:58.352961  103400 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.203:22: connect: no route to host
	W0417 19:05:58.353109  103400 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E0417 19:05:58.353132  103400 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	I0417 19:05:58.353144  103400 status.go:257] ha-467706-m04 status: &{Name:ha-467706-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0417 19:05:58.353170  103400 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host

                                                
                                                
** /stderr **
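For the two nodes that are still reachable in the trace above, the apiserver check reduces to a TLS GET against the shared VIP's /healthz endpoint (api_server.go: "Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"). A minimal, self-contained sketch of that kind of probe, not minikube's own client; skipping certificate verification is an assumption made here purely to keep the example free of the cluster CA bundle:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// VIP and port taken from the log above; everything else is illustrative.
	url := "https://192.168.39.254:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}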
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr" : exit status 3
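The exit status 3 traces back to the last node checked: ha-467706-m04 has been stopped, so dialing its SSH endpoint fails with "connect: no route to host" and status reports Host:Error / Kubelet:Nonexistent for it. A minimal sketch of that reachability check, using a plain net.DialTimeout rather than minikube's sshutil retry logic, with the address copied from the log above and the timeout chosen only for illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SSH endpoint of ha-467706-m04 from the log; the timeout is an illustrative choice.
	addr := "192.168.39.203:22"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// A powered-off KVM guest typically surfaces here as
		// "dial tcp 192.168.39.203:22: connect: no route to host".
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("node reachable:", conn.RemoteAddr())
}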
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-467706 -n ha-467706
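The --format={{.Host}} flag in that post-mortem call renders each node's status through a Go text/template. A minimal sketch of how such a template is evaluated; the Status struct below is a reduced stand-in used only for illustration, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status mirrors only the fields needed to show how a --format template
// such as {{.Host}} is evaluated against a status value.
type Status struct {
	Name string
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Running", the same shape of output the harness captures above.
	_ = tmpl.Execute(os.Stdout, Status{Name: "ha-467706", Host: "Running"})
}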
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-467706 logs -n 25: (1.965578337s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m04 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp testdata/cp-test.txt                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706:/home/docker/cp-test_ha-467706-m04_ha-467706.txt                       |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706 sudo cat                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706.txt                                 |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m02:/home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m02 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m03:/home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n                                                                 | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | ha-467706-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-467706 ssh -n ha-467706-m03 sudo cat                                          | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC | 17 Apr 24 18:53 UTC |
	|         | /home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-467706 node stop m02 -v=7                                                     | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-467706 node start m02 -v=7                                                    | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-467706 -v=7                                                           | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-467706 -v=7                                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-467706 --wait=true -v=7                                                    | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:59 UTC | 17 Apr 24 19:03 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-467706                                                                | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:03 UTC |                     |
	| node    | ha-467706 node delete m03 -v=7                                                   | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:03 UTC | 17 Apr 24 19:03 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-467706 stop -v=7                                                              | ha-467706 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 18:59:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 18:59:16.161168  101637 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:59:16.161416  101637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:59:16.161424  101637 out.go:304] Setting ErrFile to fd 2...
	I0417 18:59:16.161429  101637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:59:16.161600  101637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:59:16.162161  101637 out.go:298] Setting JSON to false
	I0417 18:59:16.163042  101637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9704,"bootTime":1713370652,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:59:16.163108  101637 start.go:139] virtualization: kvm guest
	I0417 18:59:16.165441  101637 out.go:177] * [ha-467706] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:59:16.167040  101637 notify.go:220] Checking for updates...
	I0417 18:59:16.167048  101637 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:59:16.168655  101637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:59:16.170321  101637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:59:16.171696  101637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:59:16.173643  101637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:59:16.174954  101637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:59:16.176897  101637 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:59:16.177114  101637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:59:16.177714  101637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:59:16.177770  101637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:59:16.193529  101637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0417 18:59:16.193931  101637 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:59:16.194519  101637 main.go:141] libmachine: Using API Version  1
	I0417 18:59:16.194543  101637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:59:16.194921  101637 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:59:16.195101  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.232622  101637 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 18:59:16.234260  101637 start.go:297] selected driver: kvm2
	I0417 18:59:16.234281  101637 start.go:901] validating driver "kvm2" against &{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:59:16.234478  101637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:59:16.234851  101637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:59:16.234957  101637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 18:59:16.250785  101637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 18:59:16.251868  101637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 18:59:16.251980  101637 cni.go:84] Creating CNI manager for ""
	I0417 18:59:16.251992  101637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0417 18:59:16.252120  101637 start.go:340] cluster config:
	{Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.1
68.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fal
se headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:59:16.252414  101637 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 18:59:16.254491  101637 out.go:177] * Starting "ha-467706" primary control-plane node in "ha-467706" cluster
	I0417 18:59:16.255875  101637 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 18:59:16.255932  101637 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 18:59:16.255949  101637 cache.go:56] Caching tarball of preloaded images
	I0417 18:59:16.256029  101637 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 18:59:16.256042  101637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 18:59:16.256175  101637 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/config.json ...
	I0417 18:59:16.256404  101637 start.go:360] acquireMachinesLock for ha-467706: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 18:59:16.256465  101637 start.go:364] duration metric: took 37.118µs to acquireMachinesLock for "ha-467706"
	I0417 18:59:16.256487  101637 start.go:96] Skipping create...Using existing machine configuration
	I0417 18:59:16.256495  101637 fix.go:54] fixHost starting: 
	I0417 18:59:16.256806  101637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:59:16.256838  101637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:59:16.271803  101637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0417 18:59:16.272269  101637 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:59:16.272864  101637 main.go:141] libmachine: Using API Version  1
	I0417 18:59:16.272893  101637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:59:16.273208  101637 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:59:16.273421  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.273596  101637 main.go:141] libmachine: (ha-467706) Calling .GetState
	I0417 18:59:16.275481  101637 fix.go:112] recreateIfNeeded on ha-467706: state=Running err=<nil>
	W0417 18:59:16.275501  101637 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 18:59:16.278558  101637 out.go:177] * Updating the running kvm2 "ha-467706" VM ...
	I0417 18:59:16.280125  101637 machine.go:94] provisionDockerMachine start ...
	I0417 18:59:16.280152  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 18:59:16.280444  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.283206  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.283733  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.283761  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.283954  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.284136  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.284291  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.284389  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.284586  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.284846  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.284860  101637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 18:59:16.391588  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:59:16.391618  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.391897  101637 buildroot.go:166] provisioning hostname "ha-467706"
	I0417 18:59:16.391929  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.392134  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.395071  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.395519  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.395545  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.395761  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.395959  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.396197  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.396351  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.396533  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.396719  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.396734  101637 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-467706 && echo "ha-467706" | sudo tee /etc/hostname
	I0417 18:59:16.521926  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-467706
	
	I0417 18:59:16.521959  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.524731  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.525161  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.525194  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.525387  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.525588  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.525765  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.525903  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.526064  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:16.526287  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:16.526312  101637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-467706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-467706/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-467706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 18:59:16.630177  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 18:59:16.630223  101637 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 18:59:16.630249  101637 buildroot.go:174] setting up certificates
	I0417 18:59:16.630263  101637 provision.go:84] configureAuth start
	I0417 18:59:16.630278  101637 main.go:141] libmachine: (ha-467706) Calling .GetMachineName
	I0417 18:59:16.630606  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 18:59:16.633428  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.633805  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.633826  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.634000  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.636292  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.636630  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.636659  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.636789  101637 provision.go:143] copyHostCerts
	I0417 18:59:16.636819  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:59:16.636855  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 18:59:16.636875  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 18:59:16.636940  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 18:59:16.637032  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:59:16.637050  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 18:59:16.637061  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 18:59:16.637085  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 18:59:16.637125  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:59:16.637141  101637 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 18:59:16.637147  101637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 18:59:16.637166  101637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 18:59:16.637213  101637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.ha-467706 san=[127.0.0.1 192.168.39.159 ha-467706 localhost minikube]
	I0417 18:59:16.942977  101637 provision.go:177] copyRemoteCerts
	I0417 18:59:16.943041  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 18:59:16.943068  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:16.945848  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.946244  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:16.946268  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:16.946443  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:16.946660  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:16.946824  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:16.946973  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 18:59:17.032860  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 18:59:17.032932  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 18:59:17.059369  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 18:59:17.059443  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0417 18:59:17.085939  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 18:59:17.086012  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 18:59:17.112322  101637 provision.go:87] duration metric: took 482.044876ms to configureAuth
	I0417 18:59:17.112355  101637 buildroot.go:189] setting minikube options for container-runtime
	I0417 18:59:17.112607  101637 config.go:182] Loaded profile config "ha-467706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:59:17.112702  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 18:59:17.115466  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:17.115908  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 18:59:17.115945  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 18:59:17.116148  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 18:59:17.116355  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:17.116531  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 18:59:17.116654  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 18:59:17.116812  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 18:59:17.116988  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 18:59:17.117004  101637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:00:48.097917  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:00:48.097950  101637 machine.go:97] duration metric: took 1m31.817811387s to provisionDockerMachine
	I0417 19:00:48.097990  101637 start.go:293] postStartSetup for "ha-467706" (driver="kvm2")
	I0417 19:00:48.098010  101637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:00:48.098064  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.098382  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:00:48.098416  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.102058  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.102679  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.102718  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.102854  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.103073  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.103313  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.103520  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.185234  101637 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:00:48.189816  101637 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:00:48.189848  101637 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:00:48.189922  101637 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:00:48.190019  101637 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:00:48.190034  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 19:00:48.190146  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:00:48.201727  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:00:48.227988  101637 start.go:296] duration metric: took 129.978773ms for postStartSetup
	I0417 19:00:48.228038  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.228433  101637 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0417 19:00:48.228464  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.231137  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.231634  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.231663  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.231820  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.232076  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.232243  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.232408  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	W0417 19:00:48.312448  101637 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0417 19:00:48.312477  101637 fix.go:56] duration metric: took 1m32.055983625s for fixHost
	I0417 19:00:48.312499  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.315427  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.315841  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.315865  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.316091  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.316300  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.316497  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.316642  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.316815  101637 main.go:141] libmachine: Using SSH client type: native
	I0417 19:00:48.317030  101637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0417 19:00:48.317044  101637 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0417 19:00:48.413977  101637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713380448.378851145
	
	I0417 19:00:48.414001  101637 fix.go:216] guest clock: 1713380448.378851145
	I0417 19:00:48.414008  101637 fix.go:229] Guest: 2024-04-17 19:00:48.378851145 +0000 UTC Remote: 2024-04-17 19:00:48.312483497 +0000 UTC m=+92.204953272 (delta=66.367648ms)
	I0417 19:00:48.414028  101637 fix.go:200] guest clock delta is within tolerance: 66.367648ms
	I0417 19:00:48.414034  101637 start.go:83] releasing machines lock for "ha-467706", held for 1m32.157556446s
	I0417 19:00:48.414051  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.414333  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 19:00:48.417074  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.417452  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.417484  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.417605  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418120  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418282  101637 main.go:141] libmachine: (ha-467706) Calling .DriverName
	I0417 19:00:48.418378  101637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:00:48.418444  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.418563  101637 ssh_runner.go:195] Run: cat /version.json
	I0417 19:00:48.418592  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHHostname
	I0417 19:00:48.421174  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421538  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421720  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.421742  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.421933  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.422020  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:48.422044  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:48.422108  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.422254  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHPort
	I0417 19:00:48.422291  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.422450  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHKeyPath
	I0417 19:00:48.422449  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.422607  101637 main.go:141] libmachine: (ha-467706) Calling .GetSSHUsername
	I0417 19:00:48.422768  101637 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/ha-467706/id_rsa Username:docker}
	I0417 19:00:48.518666  101637 ssh_runner.go:195] Run: systemctl --version
	I0417 19:00:48.525009  101637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:00:48.690941  101637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 19:00:48.697368  101637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:00:48.697454  101637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:00:48.708545  101637 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0417 19:00:48.708585  101637 start.go:494] detecting cgroup driver to use...
	I0417 19:00:48.708683  101637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:00:48.728162  101637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:00:48.743836  101637 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:00:48.743904  101637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:00:48.758958  101637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:00:48.773887  101637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:00:48.927506  101637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:00:49.080809  101637 docker.go:233] disabling docker service ...
	I0417 19:00:49.080910  101637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:00:49.098998  101637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:00:49.114271  101637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:00:49.268380  101637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:00:49.417855  101637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
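The steps above stop and mask both cri-docker and the Docker engine before CRI-O takes over. A minimal sketch, not part of this test run and assuming shell access to the node (for example via minikube ssh -p ha-467706), to confirm both stay out of the way:

  # hypothetical manual check; "masked" and "inactive" are the expected answers
  systemctl is-enabled cri-docker.service docker.service
  systemctl is-active cri-docker.socket docker.socket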
	I0417 19:00:49.432753  101637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:00:49.454659  101637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:00:49.454748  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.466397  101637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:00:49.466465  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.477794  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.489404  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.501053  101637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:00:49.514720  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.526494  101637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.540317  101637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:00:49.551931  101637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:00:49.562342  101637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:00:49.572515  101637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:00:49.729924  101637 ssh_runner.go:195] Run: sudo systemctl restart crio
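The sed edits above pin the CRI-O pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf and open unprivileged ports via default_sysctls, while the earlier tee pointed crictl at the CRI-O socket. A hedged sketch, not from this run, of verifying the result on the node once crio has restarted:

  # expected: pause_image = "registry.k8s.io/pause:3.9" and cgroup_manager = "cgroupfs"
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # crictl defaults come from the /etc/crictl.yaml written a few lines earlier
  sudo cat /etc/crictl.yaml
  sudo systemctl is-active crio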
	I0417 19:00:50.043377  101637 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:00:50.043450  101637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:00:50.048656  101637 start.go:562] Will wait 60s for crictl version
	I0417 19:00:50.048712  101637 ssh_runner.go:195] Run: which crictl
	I0417 19:00:50.053039  101637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:00:50.103281  101637 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:00:50.103413  101637 ssh_runner.go:195] Run: crio --version
	I0417 19:00:50.134323  101637 ssh_runner.go:195] Run: crio --version
	I0417 19:00:50.168920  101637 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 19:00:50.170840  101637 main.go:141] libmachine: (ha-467706) Calling .GetIP
	I0417 19:00:50.173677  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:50.174095  101637 main.go:141] libmachine: (ha-467706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c1:55", ip: ""} in network mk-ha-467706: {Iface:virbr1 ExpiryTime:2024-04-17 19:49:38 +0000 UTC Type:0 Mac:52:54:00:3b:c1:55 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:ha-467706 Clientid:01:52:54:00:3b:c1:55}
	I0417 19:00:50.174125  101637 main.go:141] libmachine: (ha-467706) DBG | domain ha-467706 has defined IP address 192.168.39.159 and MAC address 52:54:00:3b:c1:55 in network mk-ha-467706
	I0417 19:00:50.174371  101637 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:00:50.179425  101637 kubeadm.go:877] updating cluster {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc
.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:00:50.179572  101637 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:00:50.179655  101637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:00:50.228025  101637 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:00:50.228053  101637 crio.go:433] Images already preloaded, skipping extraction
	I0417 19:00:50.228132  101637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:00:50.280514  101637 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:00:50.280543  101637 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:00:50.280552  101637 kubeadm.go:928] updating node { 192.168.39.159 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:00:50.280650  101637 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-467706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
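The unit fragment above is written further down in this log as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch, not from this run, of checking the effective ExecStart once the drop-in is in place:

  # systemd merges the drop-in with /lib/systemd/system/kubelet.service
  systemctl cat kubelet
  systemctl show kubelet -p ExecStart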
	I0417 19:00:50.280749  101637 ssh_runner.go:195] Run: crio config
	I0417 19:00:50.329894  101637 cni.go:84] Creating CNI manager for ""
	I0417 19:00:50.329924  101637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0417 19:00:50.329938  101637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:00:50.329979  101637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-467706 NodeName:ha-467706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:00:50.330119  101637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-467706"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
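The kubeadm configuration above is rendered to /var/tmp/minikube/kubeadm.yaml.new further down in this log. A hedged sketch, not from this run, of inspecting and validating it on the node (assuming the bundled kubeadm supports the config validate subcommand):

  sudo cat /var/tmp/minikube/kubeadm.yaml.new
  # assumption: "kubeadm config validate" is available in this kubeadm release
  sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new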
	
	I0417 19:00:50.330139  101637 kube-vip.go:111] generating kube-vip config ...
	I0417 19:00:50.330178  101637 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0417 19:00:50.343385  101637 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0417 19:00:50.343507  101637 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
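The static pod above runs kube-vip with control-plane load-balancing on the HA VIP 192.168.39.254. A minimal check, not from this run, that the container is up and the VIP answers on the API server port:

  sudo crictl ps --name kube-vip
  # -k because the apiserver cert is signed by minikubeCA rather than a system CA
  curl -k --max-time 2 https://192.168.39.254:8443/healthz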
	I0417 19:00:50.343566  101637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:00:50.354586  101637 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:00:50.354664  101637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0417 19:00:50.365419  101637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0417 19:00:50.383471  101637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:00:50.400939  101637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0417 19:00:50.418982  101637 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0417 19:00:50.437425  101637 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0417 19:00:50.442580  101637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:00:50.593051  101637 ssh_runner.go:195] Run: sudo systemctl start kubelet
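With the kubelet drop-in, unit file, kubeadm config and kube-vip manifest copied over, systemd is reloaded and kubelet is started. A quick status sketch, not from this run, for when this step needs debugging:

  sudo systemctl --no-pager status kubelet
  sudo journalctl -u kubelet --no-pager -n 20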
	I0417 19:00:50.608721  101637 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706 for IP: 192.168.39.159
	I0417 19:00:50.608747  101637 certs.go:194] generating shared ca certs ...
	I0417 19:00:50.608763  101637 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.608946  101637 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:00:50.609008  101637 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:00:50.609019  101637 certs.go:256] generating profile certs ...
	I0417 19:00:50.609279  101637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/client.key
	I0417 19:00:50.609317  101637 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643
	I0417 19:00:50.609339  101637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159 192.168.39.236 192.168.39.250 192.168.39.254]
	I0417 19:00:50.867791  101637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 ...
	I0417 19:00:50.867824  101637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643: {Name:mk5b418360c8bcc8349e4f44b04836a80d7be0aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.868030  101637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643 ...
	I0417 19:00:50.868052  101637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643: {Name:mk3b23f1d93c687c18fca647217d3523e9ea468d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:00:50.868165  101637 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt.29f9a643 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt
	I0417 19:00:50.868390  101637 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key.29f9a643 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key
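The apiserver serving certificate generated above carries every control-plane IP plus the VIP as SANs. A hedged sketch, not from this run, to confirm them from the host side:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt \
    | grep -A1 'Subject Alternative Name'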
	I0417 19:00:50.868603  101637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key
	I0417 19:00:50.868629  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 19:00:50.868650  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 19:00:50.868669  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 19:00:50.868689  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 19:00:50.868706  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 19:00:50.868722  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 19:00:50.868747  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 19:00:50.868765  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 19:00:50.868857  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:00:50.868894  101637 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:00:50.868907  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:00:50.868939  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:00:50.868965  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:00:50.869002  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:00:50.869056  101637 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:00:50.869099  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:50.869120  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 19:00:50.869139  101637 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 19:00:50.870012  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:00:50.932801  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:00:50.958694  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:00:50.985090  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:00:51.011010  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0417 19:00:51.037170  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:00:51.063677  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:00:51.090502  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/ha-467706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:00:51.117585  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:00:51.144390  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:00:51.170142  101637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:00:51.195426  101637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:00:51.213820  101637 ssh_runner.go:195] Run: openssl version
	I0417 19:00:51.220452  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:00:51.232880  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.237923  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.237988  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:00:51.244487  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:00:51.255292  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:00:51.267739  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.272618  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.272674  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:00:51.278791  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:00:51.289343  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:00:51.301359  101637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.306088  101637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.306164  101637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:00:51.312212  101637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
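The b5213941.0, 51391683.0 and 3ec20f2e.0 symlinks above follow the OpenSSL subject-hash convention: the link name is the hash printed by openssl x509 -hash for that certificate. A sketch, not from this run, reproducing the mapping for the minikube CA:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  ls -l "/etc/ssl/certs/${h}.0"   # should link back to minikubeCA.pem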
	I0417 19:00:51.322447  101637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:00:51.327322  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:00:51.333432  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:00:51.339412  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:00:51.345743  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:00:51.351477  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:00:51.357219  101637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
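Each openssl -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours. A one-line sketch, not from this run:

  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for at least 24h" || echo "expires within 24h"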
	I0417 19:00:51.363079  101637 kubeadm.go:391] StartCluster: {Name:ha-467706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2
ClusterName:ha-467706 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.236 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:00:51.363221  101637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:00:51.363279  101637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:00:51.402856  101637 cri.go:89] found id: "136f02122ce7924cf7f7ef186573da1bc32d8e2704a97929d667ffdbc7943e4d"
	I0417 19:00:51.402879  101637 cri.go:89] found id: "7b12c8f9b974c559cc41e8f0b2e4b7085869c520ca4b2dee99eb44c5ea5aff3f"
	I0417 19:00:51.402884  101637 cri.go:89] found id: "c2109a45792da7037c574a8a67e646018a64869de55726de271a3c5ef3408599"
	I0417 19:00:51.402888  101637 cri.go:89] found id: "63d24dc45a18c29a0b2653723a7efaabcd1624b86e44e3e893975eddb4ed25c9"
	I0417 19:00:51.402892  101637 cri.go:89] found id: "92b4d2fc69ea2149747a0aeb92fc1bb791df4389dd47840bd21313c1cd295cb7"
	I0417 19:00:51.402896  101637 cri.go:89] found id: "143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0"
	I0417 19:00:51.402900  101637 cri.go:89] found id: "56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230"
	I0417 19:00:51.402904  101637 cri.go:89] found id: "2f2ed526ef2f9ac7bfc7074a0949f7d97e5ea5227e7baf3882d897da4753705d"
	I0417 19:00:51.402908  101637 cri.go:89] found id: "fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1"
	I0417 19:00:51.402918  101637 cri.go:89] found id: "c2e7dc14e0398eb1a0fbe87ca0f8fc81d718a21d9d41e0f03029cf8ce888af8b"
	I0417 19:00:51.402922  101637 cri.go:89] found id: "0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04"
	I0417 19:00:51.402926  101637 cri.go:89] found id: "d1e96d91894cf944f472f9327a83705e9c8e8e4edd31fdb0a902e1b7d3b2d14f"
	I0417 19:00:51.402930  101637 cri.go:89] found id: "644754e2725b2c54326d7424afa917ad298b867b229958f776f120be3114457e"
	I0417 19:00:51.402934  101637 cri.go:89] found id: "7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c"
	I0417 19:00:51.402940  101637 cri.go:89] found id: ""
	I0417 19:00:51.402987  101637 ssh_runner.go:195] Run: sudo runc list -f json
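The fourteen IDs above are the kube-system containers matched by the label filter; any of them can be examined with crictl. A sketch, not from this run, using the first ID as an example:

  sudo crictl inspect 136f02122ce7924cf7f7ef186573da1bc32d8e2704a97929d667ffdbc7943e4d \
    | grep -E '"state"|"startedAt"|"image"'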
	
	
	==> CRI-O <==
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.015079191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380759015055783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=589a174e-b894-4197-a34a-86ec7bead1ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.015711161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdddad75-afaa-4a70-be3f-739494dbadf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.015786181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdddad75-afaa-4a70-be3f-739494dbadf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.016251632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b042a64d5d38ea0c4fde56edd54dab1f3419ce66fa063f63a4c7a32b1a50d1ed,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713380632296807698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8629e94f9d
56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da24966583c20ca
587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f
78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5
e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kuber
netes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52
250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedA
t:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdddad75-afaa-4a70-be3f-739494dbadf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.062877111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83865c4f-e024-4bed-81a4-345c6cd45a35 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.062955235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83865c4f-e024-4bed-81a4-345c6cd45a35 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.064718693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ba9384c-a1e5-41a7-a9c2-1531b5fce2bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.065592096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380759065563097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ba9384c-a1e5-41a7-a9c2-1531b5fce2bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.066172290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=195ebd7c-af86-4c5a-b69c-f24ffefa2d75 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.066232415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=195ebd7c-af86-4c5a-b69c-f24ffefa2d75 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.066626481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b042a64d5d38ea0c4fde56edd54dab1f3419ce66fa063f63a4c7a32b1a50d1ed,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713380632296807698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8629e94f9d
56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da24966583c20ca
587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f
78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5
e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kuber
netes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52
250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedA
t:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=195ebd7c-af86-4c5a-b69c-f24ffefa2d75 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.116284159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=728eaa50-0464-4b23-a668-3498e9f2fb0f name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.116417538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=728eaa50-0464-4b23-a668-3498e9f2fb0f name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.117804549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0da74743-12aa-4d11-a7ae-d3a95d7b2941 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.118448508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380759118423465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0da74743-12aa-4d11-a7ae-d3a95d7b2941 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.119189633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07e246fe-3808-4e67-a349-983bb901d307 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.119247014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07e246fe-3808-4e67-a349-983bb901d307 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.119655660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b042a64d5d38ea0c4fde56edd54dab1f3419ce66fa063f63a4c7a32b1a50d1ed,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713380632296807698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8629e94f9d
56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da24966583c20ca
587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f
78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5
e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kuber
netes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52
250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedA
t:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07e246fe-3808-4e67-a349-983bb901d307 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.169778098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=604faf61-424d-49a4-b65a-5fbd60b1d8b0 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.169866825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=604faf61-424d-49a4-b65a-5fbd60b1d8b0 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.171340783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c651a360-e8ff-45b2-a617-e8d92493b288 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.171761685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713380759171739064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144997,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c651a360-e8ff-45b2-a617-e8d92493b288 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.172591223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9794162-8975-4537-8e02-f10f21fd79ae name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.172651169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9794162-8975-4537-8e02-f10f21fd79ae name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:05:59 ha-467706 crio[3936]: time="2024-04-17 19:05:59.173555370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b042a64d5d38ea0c4fde56edd54dab1f3419ce66fa063f63a4c7a32b1a50d1ed,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713380632296807698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713380514308874000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713380501305374252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713380498300148300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f2a094ca0704c05b9c54c22f6d74cdcc31e87e6095d940954e8993f2a1c0d35,PodSandboxId:7aa7198c72741e9065c417239c44f13574ffbd592cabd50ce9d7ed82feb8a93c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713380490620935202,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4926fd2766532d1c8f6048e5e5bea3976ea6608fb6036b0bf469905f8ce7f2d6,PodSandboxId:646502d0082788ed235f466122b921f58518511c2838b89bc8fc560fe6ed764f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713380472691798068,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5be910bca98183ab37b557076ce4584,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:45486612565dc51ed7462a2e5b385e3efedb69c3d11fcaf1afb1da9dc6cd8911,PodSandboxId:d13ddd1e6d9bd3c5be66eb497eabb6e45770b56cc9e223b7f3795357fa8861a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713380457303448590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5a737ba-33c0-4c0d-ab14-fe98f2c6e903,},Annotations:map[string]string{io.kubernetes.container.hash: a7546df6,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6,PodSandboxId:ea88d6e24c8ce1db103dd94eaee99e0e0f9cbeb6402f4e70ba66051c58eb57e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713380457579639888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hspjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccc61fa-7766-431c-9f06-4fdfe455f551,},Annotations:map[string]string{io.kubernetes.container.hash: 56bcb65c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8629e94f9d
56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94,PodSandboxId:d139a082bcd0e226d07a1da5da6c67ae1bf0667085d8eab490dc15299d8a23b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713380457304449795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20b5bc576bf50109f1f95724ecc7da24966583c20ca
587b933fec546d96bf0a5,PodSandboxId:0dfa2b67cf18d839ab88e000e699d629aaf812f7032d533f77ea9be8c3cbff83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457534329599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315,PodSandboxId:18b1038ab4c449518ba074f3b4d3811f4b4b4910ddc4143e9a9ec088c1f80c09,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713380457467494064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505,PodSandboxId:03c3df56890dcc9110f6e228af6ef736143366afc2a98baa4aa854d630644eaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713380457133794130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45523f
78a5229d0b08cb428fcaedfed0,},Annotations:map[string]string{io.kubernetes.container.hash: 91cb0593,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0,PodSandboxId:0b17aae28bd5cd964c82ce421f80ea95561ae2612a174128f5317eeb95250053,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713380457259501797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
507e54022f894ce12e7854719296d07,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf,PodSandboxId:75507436451ad1bc02f399ffb0331ed2ff56dab8a074cec93abc97489b1a2d9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713380457260523573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5
e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584,PodSandboxId:1c643006fcb2c48323f8305442324279fc291cc18fc26617f3f5f8206d5fc805,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713380457180562292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kuber
netes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e18e5085cb7dd6c95fb8bf1229f0c45008cce304c1ead3cea2d6da36b66d51,PodSandboxId:1b57101a3681c5db6b4f34a77455f6f491097dd29c13b5296d6610accd0b65c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713379972464265623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-r65s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a14d0b32-aa41-4396-a2f7-643e8e32d96d,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8d68b365,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230,PodSandboxId:2887673f339d8aca82e90272f06ce0c4744ef4ec4b48d38aff27b25d5ec1db35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820245230115,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-56dz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242dc56e-69d4-4742-8c4a-26b465f94153,},Annotations:map[string]string{io.kubernetes.container.hash: d376a5ac,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0,PodSandboxId:0836a6cd9f827b7b8d4555cf10ba3a53dddaed91a13975d989668b12a77de7b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713379820253209762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-kcdqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5353b60b-c7db-4eac-b0e9-915a8df02ae6,},Annotations:map[string]string{io.kubernetes.container.hash: e7e6a88e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1,PodSandboxId:269ac099b43b4bbb9fbc405722fbbab237ab54aeef7290fe4b9e366401a294a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52
250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713379817830895547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec70213c-82da-44af-a5ef-34157c4edc01,},Annotations:map[string]string{io.kubernetes.container.hash: 936f368f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04,PodSandboxId:167b41d6ec7a7d2bd6a7a061a387fae932d17ae93a2969023dc60b85a3f84a9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1713379798099567901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5533a84d7b0b75b280465dc195066a47,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2bdf98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c,PodSandboxId:18f84a94ee364210a00aec2b2b51f0bb97230343abc962600ecf0e9b19dc4a08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedA
t:1713379797955188600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-467706,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771847de86593b1b57b1c5e8f0129c24,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9794162-8975-4537-8e02-f10f21fd79ae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b042a64d5d38e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   d13ddd1e6d9bd       storage-provisioner
	f516a8700689c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   ea88d6e24c8ce       kindnet-hspjv
	8d919d90d622c       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      4 minutes ago       Running             kube-controller-manager   2                   0b17aae28bd5c       kube-controller-manager-ha-467706
	01db93220056c       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      4 minutes ago       Running             kube-apiserver            3                   03c3df56890dc       kube-apiserver-ha-467706
	3f2a094ca0704       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   7aa7198c72741       busybox-fc5497c4f-r65s7
	4926fd2766532       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   646502d008278       kube-vip-ha-467706
	aaaccc1b9eff4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   ea88d6e24c8ce       kindnet-hspjv
	20b5bc576bf50       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0dfa2b67cf18d       coredns-7db6d8ff4d-56dz8
	a8435a798cb32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   18b1038ab4c44       coredns-7db6d8ff4d-kcdqn
	8629e94f9d56f       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      5 minutes ago       Running             kube-proxy                1                   d139a082bcd0e       kube-proxy-hd469
	45486612565dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   d13ddd1e6d9bd       storage-provisioner
	d6429c5525ff3       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      5 minutes ago       Running             kube-scheduler            1                   75507436451ad       kube-scheduler-ha-467706
	3a5559b8f0836       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      5 minutes ago       Exited              kube-controller-manager   1                   0b17aae28bd5c       kube-controller-manager-ha-467706
	d747907756c6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   1c643006fcb2c       etcd-ha-467706
	3057b83de2f13       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      5 minutes ago       Exited              kube-apiserver            2                   03c3df56890dc       kube-apiserver-ha-467706
	93e18e5085cb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   1b57101a3681c       busybox-fc5497c4f-r65s7
	143bf06c19825       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   0836a6cd9f827       coredns-7db6d8ff4d-kcdqn
	56dd0755cda79       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   2887673f339d8       coredns-7db6d8ff4d-56dz8
	fe8aab67cc372       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      15 minutes ago      Exited              kube-proxy                0                   269ac099b43b4       kube-proxy-hd469
	0b4b6b19cdcea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   167b41d6ec7a7       etcd-ha-467706
	7f539c70ed4df       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      16 minutes ago      Exited              kube-scheduler            0                   18f84a94ee364       kube-scheduler-ha-467706
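	
	The CRI container listing above (names, attempt counts, Running/Exited states) is CRI-O's own view from inside the ha-467706 VM. A listing in this shape can be reproduced while the cluster is still up with crictl over minikube ssh; a minimal sketch, assuming the profile still exists under the same name:
	
	  # list all CRI-O containers on the primary node, including exited ones
	  minikube ssh -p ha-467706 -- sudo crictl ps -a
	  # inspect one container by the (truncated) ID shown in the first column
	  minikube ssh -p ha-467706 -- sudo crictl inspect b042a64d5d38e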
	
	
	==> coredns [143bf06c19825476d6acaa493d723f11171a9d80e30f41df764ec674a91fa2e0] <==
	[INFO] 10.244.0.4:41971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086394s
	[INFO] 10.244.1.2:45052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204533s
	[INFO] 10.244.1.2:56976 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00191173s
	[INFO] 10.244.1.2:48269 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205618s
	[INFO] 10.244.1.2:41050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145556s
	[INFO] 10.244.1.2:40399 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001367129s
	[INFO] 10.244.1.2:34908 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024876s
	[INFO] 10.244.1.2:33490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115098s
	[INFO] 10.244.1.2:43721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162828s
	[INFO] 10.244.2.2:52076 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338786s
	[INFO] 10.244.0.4:58146 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084273s
	[INFO] 10.244.0.4:46620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011163s
	[INFO] 10.244.1.2:55749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161622s
	[INFO] 10.244.1.2:50475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112723s
	[INFO] 10.244.2.2:58296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123831s
	[INFO] 10.244.2.2:42756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149112s
	[INFO] 10.244.2.2:44779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135979s
	[INFO] 10.244.0.4:32859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254227s
	[INFO] 10.244.0.4:39694 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091483s
	[INFO] 10.244.1.2:48582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162571s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [20b5bc576bf50109f1f95724ecc7da24966583c20ca587b933fec546d96bf0a5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1235845291]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:08.587) (total time: 10001ms):
	Trace[1235845291]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:01:18.588)
	Trace[1235845291]: [10.001416671s] [10.001416671s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1124796004]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:10.652) (total time: 11875ms):
	Trace[1124796004]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer 11875ms (19:01:22.528)
	Trace[1124796004]: [11.875864602s] [11.875864602s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [56dd0755cda797074d46524da56506fc00c27c0d63758fdaf9a1ad7c92eff230] <==
	[INFO] 10.244.2.2:44063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012356307s
	[INFO] 10.244.2.2:52058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226126s
	[INFO] 10.244.2.2:45346 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192986s
	[INFO] 10.244.0.4:42980 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001884522s
	[INFO] 10.244.0.4:33643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177169s
	[INFO] 10.244.0.4:55640 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105826s
	[INFO] 10.244.0.4:54019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112453s
	[INFO] 10.244.2.2:41133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144651s
	[INFO] 10.244.2.2:59362 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099749s
	[INFO] 10.244.2.2:32859 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102166s
	[INFO] 10.244.0.4:33356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105006s
	[INFO] 10.244.0.4:56803 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133644s
	[INFO] 10.244.1.2:34244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104241s
	[INFO] 10.244.1.2:43628 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148576s
	[INFO] 10.244.2.2:50718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000190649s
	[INFO] 10.244.0.4:44677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013354s
	[INFO] 10.244.0.4:45227 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159231s
	[INFO] 10.244.1.2:46121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135561s
	[INFO] 10.244.1.2:43459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116038s
	[INFO] 10.244.1.2:34953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088316s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a8435a798cb3296aea069f8b8913f528f74b416684fad962febb801585d51315] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1143499581]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:01:05.981) (total time: 10000ms):
	Trace[1143499581]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:01:15.982)
	Trace[1143499581]: [10.000934854s] [10.000934854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49070->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49070->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49082->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49082->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
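	
	The repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "connection refused" errors in the CoreDNS logs above all target the kubernetes Service ClusterIP, which is consistent with the apiserver on this node being restarted during the test (compare kube-apiserver attempts 2 and 3 in the container list). A quick cross-check, as a sketch that assumes kubectl is pointed at this cluster:
	
	  # the ClusterIP CoreDNS is dialing
	  kubectl -n default get svc kubernetes
	  # apiserver readiness once it is back up
	  kubectl get --raw='/readyz?verbose'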
	
	
	==> describe nodes <==
	Name:               ha-467706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T18_50_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:05:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:04:43 +0000   Wed, 17 Apr 2024 19:04:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:04:43 +0000   Wed, 17 Apr 2024 19:04:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:04:43 +0000   Wed, 17 Apr 2024 19:04:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:04:43 +0000   Wed, 17 Apr 2024 19:04:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    ha-467706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3208cc9eadd3453fab86398575c87f4f
	  System UUID:                3208cc9e-add3-453f-ab86-398575c87f4f
	  Boot ID:                    142d9103-8e77-48a0-a260-5d3c6e2e5842
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r65s7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-56dz8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-kcdqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-467706                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-hspjv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-467706             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-467706    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hd469                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-467706             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-467706                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 4m18s              kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Warning  ContainerGCFailed        5m55s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s              node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           4m5s               node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   RegisteredNode           3m4s               node-controller  Node ha-467706 event: Registered Node ha-467706 in Controller
	  Normal   NodeNotReady             105s               node-controller  Node ha-467706 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     76s (x2 over 15m)  kubelet          Node ha-467706 status is now: NodeHasSufficientPID
	  Normal   NodeReady                76s (x2 over 15m)  kubelet          Node ha-467706 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    76s (x2 over 15m)  kubelet          Node ha-467706 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  76s (x2 over 15m)  kubelet          Node ha-467706 status is now: NodeHasSufficientMemory
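	
	The ContainerGCFailed warning above ("dial unix /var/run/crio/crio.sock: connect: no such file or directory") means the kubelet could not reach the CRI-O socket at that point, consistent with the runtime being restarted on this node. To check the runtime from inside the VM, a sketch assuming the ha-467706 profile is still reachable:
	
	  minikube ssh -p ha-467706 -- sudo systemctl status crio
	  minikube ssh -p ha-467706 -- sudo crictl info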
	
	
	Name:               ha-467706-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_51_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:51:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:05:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:04:36 +0000   Wed, 17 Apr 2024 19:04:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:04:36 +0000   Wed, 17 Apr 2024 19:04:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:04:36 +0000   Wed, 17 Apr 2024 19:04:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:04:36 +0000   Wed, 17 Apr 2024 19:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.236
	  Hostname:    ha-467706-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f49a89e0b0d7432fa507fd1ad108778d
	  System UUID:                f49a89e0-b0d7-432f-a507-fd1ad108778d
	  Boot ID:                    c9888712-c2eb-47f0-864e-09a7afab5132
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xg855                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-467706-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-k6b9s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-467706-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-467706-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qxtf4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-467706-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-467706-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-467706-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node ha-467706-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node ha-467706-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  RegisteredNode           3m4s                   node-controller  Node ha-467706-m02 event: Registered Node ha-467706-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-467706-m02 status is now: NodeNotReady
	
	
	Name:               ha-467706-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-467706-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=ha-467706
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T18_53_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:53:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-467706-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:03:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Apr 2024 19:03:10 +0000   Wed, 17 Apr 2024 19:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-467706-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd00fe12f3b54ed0af7c6ee4cc75cc20
	  System UUID:                dd00fe12-f3b5-4ed0-af7c-6ee4cc75cc20
	  Boot ID:                    af632482-8681-49b3-9b27-9fe4b73b9f20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dps8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-v8r5k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-c7znr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-467706-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Normal   RegisteredNode           3m4s                   node-controller  Node ha-467706-m04 event: Registered Node ha-467706-m04 in Controller
	  Warning  Rebooted                 2m49s (x3 over 2m49s)  kubelet          Node ha-467706-m04 has been rebooted, boot id: af632482-8681-49b3-9b27-9fe4b73b9f20
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m49s (x4 over 2m49s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x4 over 2m49s)  kubelet          Node ha-467706-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x4 over 2m49s)  kubelet          Node ha-467706-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m49s                  kubelet          Node ha-467706-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-467706-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m31s)   node-controller  Node ha-467706-m04 status is now: NodeNotReady
	
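	The describe output above shows ha-467706-m04 carrying node.kubernetes.io/unreachable taints with every condition reported as Unknown because the kubelet stopped posting status, while ha-467706-m02 had recovered to Ready. The short client-go sketch below reproduces that per-node Ready summary outside of kubectl describe; it is illustrative only, not part of the test suite, and it assumes a kubeconfig at the default ~/.kube/config path pointing at this cluster.
	
	// nodestatus.go: minimal sketch that prints each node's Ready condition,
	// mirroring the describe output above. Assumes ~/.kube/config is valid.
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Status is "True", "False", or "Unknown" (kubelet stopped reporting).
					fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}
	
	Against this cluster at dump time it would print Ready=True for ha-467706-m02 and Ready=Unknown for ha-467706-m04.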
	
	==> dmesg <==
	[  +0.063801] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068254] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.163928] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151596] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.299739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.466528] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062588] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.060993] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.990848] kauditd_printk_skb: 62 callbacks suppressed
	[Apr17 18:50] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.084981] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.627165] kauditd_printk_skb: 21 callbacks suppressed
	[Apr17 18:51] kauditd_printk_skb: 72 callbacks suppressed
	[Apr17 18:57] kauditd_printk_skb: 1 callbacks suppressed
	[Apr17 19:00] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.150432] systemd-fstab-generator[3867]: Ignoring "noauto" option for root device
	[  +0.187265] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.155375] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[  +0.300161] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[  +0.869428] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +6.236124] kauditd_printk_skb: 122 callbacks suppressed
	[Apr17 19:01] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.066335] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.090608] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.303219] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0b4b6b19cdceaaa9c5e07ca749ed05ad4ef54e42a70f972986b9bac17eee3a04] <==
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.639996Z","time spent":"7.629506381s","remote":"127.0.0.1:57240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269516Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.652689Z","time spent":"7.616824003s","remote":"127.0.0.1:57194","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-17T18:59:17.269528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:59:09.645049Z","time spent":"7.624473999s","remote":"127.0.0.1:54250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	2024/04/17 18:59:17 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-17T18:59:17.31768Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f0ef8018a32f46af","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-17T18:59:17.317958Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318007Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318064Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318256Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318312Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318375Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318409Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9a927b09686f3923"}
	{"level":"info","ts":"2024-04-17T18:59:17.318433Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318494Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.31857Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318617Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318664Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.318692Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T18:59:17.321891Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-04-17T18:59:17.322057Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2024-04-17T18:59:17.322165Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-467706","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"]}
	
	
	==> etcd [d747907756c6b32be6d0aae467f2824ee0ca38708ee4f9bae2a951d9551b1584] <==
	{"level":"info","ts":"2024-04-17T19:02:38.792887Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:38.802354Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:02:43.610359Z","caller":"traceutil/trace.go:171","msg":"trace[188765169] transaction","detail":"{read_only:false; response_revision:2313; number_of_response:1; }","duration":"114.057089ms","start":"2024-04-17T19:02:43.49628Z","end":"2024-04-17T19:02:43.610337Z","steps":["trace[188765169] 'process raft request'  (duration: 113.890172ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:03:24.821191Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.250:60290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-17T19:03:24.83598Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.250:60306","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-17T19:03:24.851946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(11138100108777699619 17361235931841906351)"}
	{"level":"info","ts":"2024-04-17T19:03:24.855065Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","removed-remote-peer-id":"de2aef77ed8335d","removed-remote-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-04-17T19:03:24.855454Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.855798Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:03:24.855882Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.856233Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:03:24.856293Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.856351Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"f0ef8018a32f46af","removed-member-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.856401Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-04-17T19:03:24.856678Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.856957Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d","error":"context canceled"}
	{"level":"warn","ts":"2024-04-17T19:03:24.857043Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"de2aef77ed8335d","error":"failed to read de2aef77ed8335d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-17T19:03:24.857172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.857468Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-04-17T19:03:24.85758Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f0ef8018a32f46af","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:03:24.857627Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:03:24.857664Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"f0ef8018a32f46af","removed-remote-peer-id":"de2aef77ed8335d"}
	{"level":"info","ts":"2024-04-17T19:03:24.85774Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"f0ef8018a32f46af","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.876804Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"f0ef8018a32f46af","remote-peer-id-stream-handler":"f0ef8018a32f46af","remote-peer-id-from":"de2aef77ed8335d"}
	{"level":"warn","ts":"2024-04-17T19:03:24.878357Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"f0ef8018a32f46af","remote-peer-id-stream-handler":"f0ef8018a32f46af","remote-peer-id-from":"de2aef77ed8335d"}
	
	
	==> kernel <==
	 19:05:59 up 16 min,  0 users,  load average: 0.47, 0.66, 0.41
	Linux ha-467706 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aaaccc1b9eff456df3176a8988d9d9f7bc9c8408c5e1c71786f0bee2a13a5fa6] <==
	I0417 19:00:58.117002       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0417 19:01:01.025231       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0417 19:01:04.096947       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0417 19:01:15.104581       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0417 19:01:22.527869       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.106:56742->10.96.0.1:443: read: connection reset by peer
	I0417 19:01:25.528614       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
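	The first kindnet container above exhausted its node-list retries while the in-cluster apiserver address 10.96.0.1:443 was unreachable and then panicked; the replacement container in the next section succeeds once the control plane is back. The shape of that startup path is a bounded retry with a fixed sleep, roughly as in the sketch below. This is an illustration only, not kindnet's actual source; maxRetries, the sleep interval, and the getNodes helper are assumed names.
	
	// retry.go: bounded-retry sketch matching the failure mode in the log above.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	const maxRetries = 5 // assumed value, for illustration
	
	// getNodes stands in for the node-list call that kept failing while the
	// apiserver service IP was unreachable.
	func getNodes() error {
		return errors.New(`Get "https://10.96.0.1:443/api/v1/nodes": connection refused`)
	}
	
	func main() {
		var err error
		for i := 0; i < maxRetries; i++ {
			if err = getNodes(); err == nil {
				fmt.Println("node list obtained")
				return
			}
			fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
			time.Sleep(3 * time.Second)
		}
		// Fatal exit once every retry fails, as recorded above.
		panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
	}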
	
	==> kindnet [f516a8700689c226dc6a5fe22911b347d20130d58e82326ad9bb4c599942ff28] <==
	I0417 19:05:15.689205       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:05:25.695569       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:05:25.695702       1 main.go:227] handling current node
	I0417 19:05:25.695736       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:05:25.695830       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:05:25.696037       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:05:25.696196       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:05:35.702615       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:05:35.702639       1 main.go:227] handling current node
	I0417 19:05:35.702653       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:05:35.702658       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:05:35.702774       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:05:35.702779       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:05:45.716739       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:05:45.716793       1 main.go:227] handling current node
	I0417 19:05:45.716809       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:05:45.716815       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:05:45.716943       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:05:45.716977       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	I0417 19:05:55.726788       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0417 19:05:55.726917       1 main.go:227] handling current node
	I0417 19:05:55.727012       1 main.go:223] Handling node with IPs: map[192.168.39.236:{}]
	I0417 19:05:55.727062       1 main.go:250] Node ha-467706-m02 has CIDR [10.244.1.0/24] 
	I0417 19:05:55.727404       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0417 19:05:55.727536       1 main.go:250] Node ha-467706-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [01db93220056ccd1124f03c87d73db55c8c00405faa947acbfff0b4dcef6cad7] <==
	I0417 19:01:40.302813       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:01:40.362205       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:01:40.370399       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:01:40.370506       1 policy_source.go:224] refreshing policies
	I0417 19:01:40.379876       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 19:01:40.380481       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:01:40.380632       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0417 19:01:40.380661       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:01:40.381351       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:01:40.386704       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:01:40.391317       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:01:40.395290       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:01:40.395335       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:01:40.395352       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:01:40.395357       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:01:40.395362       1 cache.go:39] Caches are synced for autoregister controller
	W0417 19:01:40.398554       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.236 192.168.39.250]
	I0417 19:01:40.399780       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:01:40.402764       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 19:01:40.406966       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0417 19:01:40.416425       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0417 19:01:41.327366       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0417 19:01:42.660827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.236 192.168.39.250]
	W0417 19:01:52.644647       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.236]
	W0417 19:03:42.642813       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.159 192.168.39.236]
	
	
	==> kube-apiserver [3057b83de2f138130b1681dfb55fbb8d335de4ef92b9f4c26c3fe6b70792b505] <==
	I0417 19:00:58.074778       1 options.go:221] external host was not specified, using 192.168.39.159
	I0417 19:00:58.081032       1 server.go:148] Version: v1.30.0-rc.2
	I0417 19:00:58.081194       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:00:58.842384       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0417 19:00:58.855810       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0417 19:00:58.856895       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0417 19:00:58.859962       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:00:58.860191       1 instance.go:299] Using reconciler: lease
	W0417 19:01:18.839034       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0417 19:01:18.839271       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0417 19:01:18.861671       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3a5559b8f0836c77487de3681ed6d3eb23319ae071858bbbef29e169087cc9e0] <==
	I0417 19:00:58.776058       1 serving.go:380] Generated self-signed cert in-memory
	I0417 19:00:59.297601       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.2"
	I0417 19:00:59.297687       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:00:59.300560       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0417 19:00:59.301891       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:00:59.302000       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:00:59.302350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0417 19:01:19.868039       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.159:8443/healthz\": dial tcp 192.168.39.159:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8d919d90d622c9429b850fe063c3906242b949dca9668a880230541c6fe301a1] <==
	I0417 19:04:14.245517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.059349ms"
	I0417 19:04:14.246159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.873µs"
	I0417 19:04:14.381442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.658076ms"
	I0417 19:04:14.381592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.678µs"
	E0417 19:04:14.403828       1 gc_controller.go:153] "Failed to get node" err="node \"ha-467706-m03\" not found" logger="pod-garbage-collector-controller" node="ha-467706-m03"
	E0417 19:04:14.403909       1 gc_controller.go:153] "Failed to get node" err="node \"ha-467706-m03\" not found" logger="pod-garbage-collector-controller" node="ha-467706-m03"
	E0417 19:04:14.403936       1 gc_controller.go:153] "Failed to get node" err="node \"ha-467706-m03\" not found" logger="pod-garbage-collector-controller" node="ha-467706-m03"
	E0417 19:04:14.403959       1 gc_controller.go:153] "Failed to get node" err="node \"ha-467706-m03\" not found" logger="pod-garbage-collector-controller" node="ha-467706-m03"
	E0417 19:04:14.403982       1 gc_controller.go:153] "Failed to get node" err="node \"ha-467706-m03\" not found" logger="pod-garbage-collector-controller" node="ha-467706-m03"
	I0417 19:04:14.421478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.039434ms"
	I0417 19:04:14.423429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.119µs"
	I0417 19:04:19.489554       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0417 19:04:19.609447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.018563ms"
	I0417 19:04:19.609936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.469µs"
	I0417 19:04:35.304482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.275043ms"
	I0417 19:04:35.304607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.676µs"
	I0417 19:04:39.529819       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0417 19:04:44.520434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.078798ms"
	I0417 19:04:44.520588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.714µs"
	I0417 19:04:44.568074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.514077ms"
	I0417 19:04:44.568433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.179µs"
	I0417 19:04:44.592025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.142619ms"
	I0417 19:04:44.592240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.687µs"
	I0417 19:04:44.639455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.441608ms"
	I0417 19:04:44.639652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.284µs"
	
	
	==> kube-proxy [8629e94f9d56f41f415631d504dfc52decd55bfebb1cc9eadb4a1fdc0a1f0b94] <==
	I0417 19:00:59.327365       1 server_linux.go:69] "Using iptables proxy"
	E0417 19:01:00.256828       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:03.328978       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:06.400738       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:12.543823       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0417 19:01:24.831694       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0417 19:01:41.256405       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	I0417 19:01:41.422268       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:01:41.422373       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:01:41.422410       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:01:41.431466       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:01:41.431783       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:01:41.431830       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:01:41.453239       1 config.go:192] "Starting service config controller"
	I0417 19:01:41.453295       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:01:41.453331       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:01:41.453336       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:01:41.472778       1 config.go:319] "Starting node config controller"
	I0417 19:01:41.472838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:01:41.555184       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:01:41.555259       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:01:41.580546       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fe8aab67cc3728c03cc31fe03f8dcc41eea7026f159c3e2994435fdcef199ae1] <==
	E0417 18:58:11.297686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:14.369905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:14.369985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:14.370189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:14.370413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:17.441076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:17.441345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:20.511694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:20.511840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:20.511923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:20.511967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:23.585005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:23.585151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:29.727565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:29.728063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:35.872844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:35.873036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:35.873557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:35.873617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:51.231559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:51.231633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1834": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:58:54.303628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:58:54.304252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	W0417 18:59:00.453520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	E0417 18:59:00.461295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-467706&resourceVersion=1832": dial tcp 192.168.39.254:8443: connect: no route to host
	
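	Every warning and error in the kube-proxy block above is the same underlying failure: the kube-vip address behind control-plane.minikube.internal (192.168.39.254:8443) returning "no route to host" while the control plane was restarting. A plain TCP probe, as in the stdlib-only sketch below, is enough to tell that condition apart from TLS or authorization errors; the address is taken from the log and the 3-second timeout is an arbitrary choice.
	
	// vipprobe.go: minimal reachability check for the control-plane VIP seen in
	// the kube-proxy errors above. Purely illustrative.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "192.168.39.254:8443" // control-plane.minikube.internal from the log
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// "no route to host" here means the VIP itself is down, matching the
			// reflector failures above, rather than an apiserver or TLS problem.
			fmt.Printf("VIP unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect to", addr, "succeeded; apiserver port is reachable")
	}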
	
	==> kube-scheduler [7f539c70ed4df5c369ed8cb3c4de4844ed7e03feafa36a10437c0946f4ca3d9c] <==
	E0417 18:59:08.953701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:08.982606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 18:59:08.983240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 18:59:09.004914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:59:09.004981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 18:59:09.297707       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 18:59:09.297774       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 18:59:09.594672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 18:59:09.594723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 18:59:09.832804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 18:59:09.833015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 18:59:10.722056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:10.722176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:10.841546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:10.841770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:11.280417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 18:59:11.280518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 18:59:15.518672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 18:59:15.518789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 18:59:17.228856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 18:59:17.228892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0417 18:59:17.229649       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0417 18:59:17.229795       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0417 18:59:17.248416       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0417 18:59:17.248620       1 run.go:74] "command failed" err="finished without leader elect"
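
Note: the burst of "forbidden" reflector errors above usually appears when the scheduler starts before its RBAC bindings are being served by the (still restarting) apiserver; once "system:kube-scheduler" resolves, the informers recover on their own. A minimal sketch, not minikube's own code, that asks the apiserver whether the current credentials may perform one of the failing list calls; the kubeadm-style kubeconfig path is an assumption for illustration:

// sketch: check whether the scheduler's credentials may list statefulsets,
// mirroring one of the "forbidden" list calls in the log above.
package main

import (
	"context"
	"fmt"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeadm's usual scheduler kubeconfig on the control-plane node (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authzv1.SelfSubjectAccessReview{
		Spec: authzv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authzv1.ResourceAttributes{
				Verb: "list", Group: "apps", Resource: "statefulsets",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("list statefulsets: allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}

Once the apiserver has finished starting and RBAC is served, the review should report allowed=true for these credentials.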
	
	
	==> kube-scheduler [d6429c5525ff3dc5d16f76df3fb2c2f654a91dbcc391c35f67c67e5da612aeaf] <==
	W0417 19:01:37.159622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.159791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.328669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.328754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.469535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.469668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:37.823051       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:37.823211       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.051573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.159:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.051659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.159:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.322970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.323042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.336579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.336652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:38.452937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	E0417 19:01:38.453019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.159:8443: connect: connection refused
	W0417 19:01:40.319458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:01:40.320071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:01:40.319805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:01:40.320381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0417 19:01:58.978678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0417 19:03:21.543708       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dps8q\": pod busybox-fc5497c4f-dps8q is already assigned to node \"ha-467706-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dps8q" node="ha-467706-m04"
	E0417 19:03:21.545711       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 303b3c70-e6bc-4e42-848c-e1ecf263c283(default/busybox-fc5497c4f-dps8q) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-dps8q"
	E0417 19:03:21.545874       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dps8q\": pod busybox-fc5497c4f-dps8q is already assigned to node \"ha-467706-m04\"" pod="default/busybox-fc5497c4f-dps8q"
	I0417 19:03:21.545957       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dps8q" node="ha-467706-m04"
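
Note: two separate symptoms show up in this second scheduler log. The "connection refused" reflector errors mean the apiserver at 192.168.39.159:8443 was not accepting connections yet, and the later DefaultBinder failure looks like a benign race in the HA setup: another scheduler instance had already bound busybox-fc5497c4f-dps8q to ha-467706-m04, so this one drops the pod from its queue instead of retrying. A minimal sketch (standard library only, timeouts chosen arbitrarily) of the kind of readiness wait that resolves the first symptom, polling the endpoint until it accepts TCP connections:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "192.168.39.159:8443" // apiserver address from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Println("still unreachable:", err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for", endpoint)
}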
	
	
	==> kubelet <==
	Apr 17 19:04:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 19:04:13 ha-467706 kubelet[1377]: E0417 19:04:13.787461    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-467706\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:13 ha-467706 kubelet[1377]: E0417 19:04:13.871061    1377 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:23 ha-467706 kubelet[1377]: E0417 19:04:23.789152    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-467706\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:23 ha-467706 kubelet[1377]: E0417 19:04:23.871964    1377 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:33 ha-467706 kubelet[1377]: E0417 19:04:33.789782    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-467706\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:33 ha-467706 kubelet[1377]: E0417 19:04:33.789833    1377 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 17 19:04:33 ha-467706 kubelet[1377]: E0417 19:04:33.873134    1377 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-467706?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 17 19:04:33 ha-467706 kubelet[1377]: I0417 19:04:33.873294    1377 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291227    1377 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: I0417 19:04:35.291261    1377 status_manager.go:853] "Failed to get status for pod" podUID="b5a737ba-33c0-4c0d-ab14-fe98f2c6e903" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": http2: client connection lost"
	Apr 17 19:04:35 ha-467706 kubelet[1377]: E0417 19:04:35.291665    1377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-467706?timeout=10s\": http2: client connection lost" interval="200ms"
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291321    1377 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291342    1377 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291360    1377 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291374    1377 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291390    1377 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291403    1377 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291432    1377 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:04:35 ha-467706 kubelet[1377]: W0417 19:04:35.291464    1377 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Apr 17 19:05:04 ha-467706 kubelet[1377]: E0417 19:05:04.306018    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:05:04 ha-467706 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:05:04 ha-467706 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:05:04 ha-467706 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:05:04 ha-467706 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
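
Note: the kubelet entries above show it repeatedly failing to renew its node Lease and node status because its connection to control-plane.minikube.internal:8443 was lost (consistent with the cluster being stopped); the ip6tables "canary" error appears unrelated, since the guest simply lacks the ip6tables nat table. A small sketch, not part of the test suite, that inspects the node's heartbeat Lease with client-go; the admin kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any kubeconfig with read access works; this path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node heartbeats are stored as Lease objects in the kube-node-lease namespace.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "ha-467706", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	holder := ""
	if lease.Spec.HolderIdentity != nil {
		holder = *lease.Spec.HolderIdentity
	}
	fmt.Printf("holder=%q lastRenew=%v\n", holder, lease.Spec.RenewTime)
}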
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0417 19:05:58.709858  103529 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18665-75973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
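
Note: the "bufio.Scanner: token too long" error in stderr is the Go standard library refusing to return a single line longer than its default limit (bufio.MaxScanTokenSize, 64 KiB); the flattened cluster-config lines in lastStart.txt easily exceed that. It is a limitation of the log reader, not of the cluster under test. A minimal sketch of reading such a file with a larger scanner buffer; the 1 MiB cap is an arbitrary choice for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/minikube-integration/18665-75973/.minikube/logs/lastStart.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is bufio.MaxScanTokenSize (64 KiB); raise it to 1 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	lines := 0
	for sc.Scan() {
		lines++
		_ = sc.Text() // process the line as needed
	}
	if err := sc.Err(); err != nil {
		fmt.Println("scan error:", err)
		return
	}
	fmt.Println("read", lines, "lines")
}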
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-467706 -n ha-467706
helpers_test.go:261: (dbg) Run:  kubectl --context ha-467706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.29s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (306.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990943
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-990943
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-990943: exit status 82 (2m2.714357477s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-990943-m03"  ...
	* Stopping node "multinode-990943-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-990943" : exit status 82
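
Note: exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in stderr above; the KVM guests never left the "Running" state within the stop window, so the CLI gave up. A minimal sketch, not the test's own code, of wrapping the same CLI invocation with an explicit timeout and surfacing the exit code (the 3-minute budget is an assumption):

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	// Same binary and profile name as in the test output above.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-990943")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("minikube stop exited with code", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube stop:", err)
	}
}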
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990943 --wait=true -v=8 --alsologtostderr
E0417 19:23:19.320840   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990943 --wait=true -v=8 --alsologtostderr: (3m1.829246196s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990943
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-990943 -n multinode-990943
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-990943 logs -n 25: (1.6274825s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943:/home/docker/cp-test_multinode-990943-m02_multinode-990943.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943 sudo cat                                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m02_multinode-990943.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03:/home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943-m03 sudo cat                                   | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp testdata/cp-test.txt                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943:/home/docker/cp-test_multinode-990943-m03_multinode-990943.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943 sudo cat                                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02:/home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943-m02 sudo cat                                   | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-990943 node stop m03                                                          | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	| node    | multinode-990943 node start                                                             | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:21 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| stop    | -p multinode-990943                                                                     | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| start   | -p multinode-990943                                                                     | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:23 UTC | 17 Apr 24 19:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:23:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:23:16.013049  112272 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:23:16.013309  112272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:23:16.013319  112272 out.go:304] Setting ErrFile to fd 2...
	I0417 19:23:16.013323  112272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:23:16.013486  112272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:23:16.014005  112272 out.go:298] Setting JSON to false
	I0417 19:23:16.014928  112272 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11144,"bootTime":1713370652,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:23:16.014988  112272 start.go:139] virtualization: kvm guest
	I0417 19:23:16.017329  112272 out.go:177] * [multinode-990943] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:23:16.018808  112272 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:23:16.020121  112272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:23:16.018864  112272 notify.go:220] Checking for updates...
	I0417 19:23:16.022728  112272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:23:16.024401  112272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:23:16.025936  112272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:23:16.027308  112272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:23:16.029097  112272 config.go:182] Loaded profile config "multinode-990943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:23:16.029290  112272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:23:16.029701  112272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:23:16.029785  112272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:23:16.044805  112272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I0417 19:23:16.045172  112272 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:23:16.045726  112272 main.go:141] libmachine: Using API Version  1
	I0417 19:23:16.045748  112272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:23:16.046102  112272 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:23:16.046301  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.082402  112272 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 19:23:16.083792  112272 start.go:297] selected driver: kvm2
	I0417 19:23:16.083808  112272 start.go:901] validating driver "kvm2" against &{Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:23:16.084097  112272 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:23:16.084549  112272 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:23:16.084632  112272 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:23:16.099289  112272 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:23:16.099909  112272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:23:16.100000  112272 cni.go:84] Creating CNI manager for ""
	I0417 19:23:16.100013  112272 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0417 19:23:16.100060  112272 start.go:340] cluster config:
	{Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:23:16.100188  112272 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:23:16.102002  112272 out.go:177] * Starting "multinode-990943" primary control-plane node in "multinode-990943" cluster
	I0417 19:23:16.103453  112272 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:23:16.103491  112272 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:23:16.103502  112272 cache.go:56] Caching tarball of preloaded images
	I0417 19:23:16.103568  112272 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:23:16.103578  112272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:23:16.103701  112272 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/config.json ...
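	Note: the profile state reused by this restart is the JSON written at the path above, with the same structure as the flattened cluster-config dump earlier in this log. A small sketch, assuming only that the file is valid JSON, that decodes it generically and prints the node entries (the "Nodes" key is taken from the dump; other field names may differ):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Profile path from the log line above; adjust MINIKUBE_HOME for other setups.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/config.json")
	if err != nil {
		panic(err)
	}
	var cfg map[string]any
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	if nodes, ok := cfg["Nodes"].([]any); ok {
		for _, n := range nodes {
			fmt.Printf("%v\n", n)
		}
	}
}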
	I0417 19:23:16.103898  112272 start.go:360] acquireMachinesLock for multinode-990943: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:23:16.103943  112272 start.go:364] duration metric: took 24.946µs to acquireMachinesLock for "multinode-990943"
	I0417 19:23:16.103962  112272 start.go:96] Skipping create...Using existing machine configuration
	I0417 19:23:16.103977  112272 fix.go:54] fixHost starting: 
	I0417 19:23:16.104231  112272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:23:16.104268  112272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:23:16.118542  112272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0417 19:23:16.118955  112272 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:23:16.119492  112272 main.go:141] libmachine: Using API Version  1
	I0417 19:23:16.119518  112272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:23:16.119857  112272 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:23:16.120038  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.120215  112272 main.go:141] libmachine: (multinode-990943) Calling .GetState
	I0417 19:23:16.121901  112272 fix.go:112] recreateIfNeeded on multinode-990943: state=Running err=<nil>
	W0417 19:23:16.121920  112272 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 19:23:16.123885  112272 out.go:177] * Updating the running kvm2 "multinode-990943" VM ...
	I0417 19:23:16.125142  112272 machine.go:94] provisionDockerMachine start ...
	I0417 19:23:16.125162  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.125364  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.127859  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.128287  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.128316  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.128418  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.128591  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.128747  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.128960  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.129142  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.129325  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.129337  112272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:23:16.246359  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-990943
	
	I0417 19:23:16.246396  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.246678  112272 buildroot.go:166] provisioning hostname "multinode-990943"
	I0417 19:23:16.246712  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.246960  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.249583  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.249915  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.249939  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.250136  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.250353  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.250513  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.250667  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.250846  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.251034  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.251049  112272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-990943 && echo "multinode-990943" | sudo tee /etc/hostname
	I0417 19:23:16.373825  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-990943
	
	I0417 19:23:16.373858  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.376629  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.376937  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.376968  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.377175  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.377406  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.377572  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.377720  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.377892  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.378071  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.378087  112272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-990943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-990943/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-990943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:23:16.486218  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
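	Note: the provisioning steps above run plain shell over SSH (print the hostname, set it, then patch /etc/hosts). A sketch of the same kind of remote call using golang.org/x/crypto/ssh; the user "docker", the key path, and the IP are taken from the ssh client lines later in this log, and this is an illustration, not minikube's own helper:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.106:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}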
	I0417 19:23:16.486247  112272 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 19:23:16.486292  112272 buildroot.go:174] setting up certificates
	I0417 19:23:16.486305  112272 provision.go:84] configureAuth start
	I0417 19:23:16.486320  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.486606  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:23:16.489094  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.489471  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.489493  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.489671  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.492219  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.492571  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.492599  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.492721  112272 provision.go:143] copyHostCerts
	I0417 19:23:16.492757  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:23:16.492814  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 19:23:16.492839  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:23:16.492905  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 19:23:16.492985  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:23:16.493007  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 19:23:16.493016  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:23:16.493054  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 19:23:16.493112  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:23:16.493137  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 19:23:16.493146  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:23:16.493180  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 19:23:16.493249  112272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.multinode-990943 san=[127.0.0.1 192.168.39.106 localhost minikube multinode-990943]
	I0417 19:23:16.677122  112272 provision.go:177] copyRemoteCerts
	I0417 19:23:16.677181  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:23:16.677206  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.679696  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.680073  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.680101  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.680304  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.680514  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.680706  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.680878  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
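
The sshutil.go line above is where a key-authenticated SSH client to 192.168.39.106:22 is built for the docker user. A minimal sketch of an equivalent client with golang.org/x/crypto/ssh (host-key verification is skipped only to keep the example short; that shortcut is an assumption, not something the log confirms):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; do not do this in production
            Timeout:         10 * time.Second,
        }

        client, err := ssh.Dial("tcp", "192.168.39.106:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Run one of the commands that appears later in this log.
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("cat /etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
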
	I0417 19:23:16.763221  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 19:23:16.763304  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:23:16.790257  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 19:23:16.790332  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0417 19:23:16.818776  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 19:23:16.818848  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 19:23:16.844481  112272 provision.go:87] duration metric: took 358.157175ms to configureAuth
	I0417 19:23:16.844512  112272 buildroot.go:189] setting minikube options for container-runtime
	I0417 19:23:16.844723  112272 config.go:182] Loaded profile config "multinode-990943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:23:16.844834  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.847570  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.847940  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.847971  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.848216  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.848402  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.848577  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.848753  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.848947  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.849115  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.849130  112272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:24:47.770117  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:24:47.770159  112272 machine.go:97] duration metric: took 1m31.64500083s to provisionDockerMachine
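
The 1m31s recorded for provisionDockerMachine above is almost entirely the single SSH command that writes the CRIO_MINIKUBE_OPTIONS drop-in and then runs systemctl restart crio (issued at 19:23:16, answered at 19:24:47). As a hedged sketch, assuming an *ssh.Client built as in the earlier example, the same drop-in could be written by piping its content through sudo tee (a hypothetical helper, not minikube's implementation):

    package provision

    import (
        "strings"

        "golang.org/x/crypto/ssh"
    )

    // writeCrioDropIn writes /etc/sysconfig/crio.minikube on the VM and restarts
    // CRI-O, mirroring the effect of the SSH command shown in the log above.
    func writeCrioDropIn(client *ssh.Client) error {
        content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        // The drop-in content is streamed over stdin and written by sudo tee.
        session.Stdin = strings.NewReader(content)
        return session.Run("sudo mkdir -p /etc/sysconfig && sudo tee /etc/sysconfig/crio.minikube >/dev/null && sudo systemctl restart crio")
    }
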
	I0417 19:24:47.770172  112272 start.go:293] postStartSetup for "multinode-990943" (driver="kvm2")
	I0417 19:24:47.770188  112272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:24:47.770212  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:47.770565  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:24:47.770623  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:47.774060  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.774556  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:47.774585  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.774713  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:47.774952  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.775092  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:47.775260  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:47.860865  112272 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:24:47.865730  112272 command_runner.go:130] > NAME=Buildroot
	I0417 19:24:47.865749  112272 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0417 19:24:47.865753  112272 command_runner.go:130] > ID=buildroot
	I0417 19:24:47.865758  112272 command_runner.go:130] > VERSION_ID=2023.02.9
	I0417 19:24:47.865763  112272 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0417 19:24:47.865823  112272 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:24:47.865842  112272 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:24:47.865912  112272 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:24:47.866001  112272 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:24:47.866012  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 19:24:47.866109  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:24:47.875968  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:24:47.901610  112272 start.go:296] duration metric: took 131.424047ms for postStartSetup
	I0417 19:24:47.901675  112272 fix.go:56] duration metric: took 1m31.797703582s for fixHost
	I0417 19:24:47.901699  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:47.904261  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.904625  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:47.904649  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.904851  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:47.905072  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.905259  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.905446  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:47.905605  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:24:47.905812  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:24:47.905824  112272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0417 19:24:48.005916  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713381887.986837354
	
	I0417 19:24:48.005942  112272 fix.go:216] guest clock: 1713381887.986837354
	I0417 19:24:48.005971  112272 fix.go:229] Guest: 2024-04-17 19:24:47.986837354 +0000 UTC Remote: 2024-04-17 19:24:47.901680293 +0000 UTC m=+91.942202216 (delta=85.157061ms)
	I0417 19:24:48.006002  112272 fix.go:200] guest clock delta is within tolerance: 85.157061ms
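
The guest clock check above runs date +%s.%N in the VM, parses the seconds.nanoseconds value, and compares it with the host's wall clock; here the delta is 85.157061ms. A small self-contained sketch of that comparison (the one-second tolerance constant is an assumption for illustration; the log does not state minikube's actual threshold):

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Value taken verbatim from the log above.
        guest, err := parseGuestClock("1713381887.986837354")
        if err != nil {
            log.Fatal(err)
        }
        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance, for illustration only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
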
	I0417 19:24:48.006011  112272 start.go:83] releasing machines lock for "multinode-990943", held for 1m31.902056736s
	I0417 19:24:48.006037  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.006333  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:24:48.009221  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.009608  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.009636  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.009776  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010410  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010611  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010710  112272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:24:48.010751  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:48.010807  112272 ssh_runner.go:195] Run: cat /version.json
	I0417 19:24:48.010833  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:48.013164  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013445  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013552  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.013592  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013711  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:48.013845  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.013868  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013878  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:48.014028  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:48.014041  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:48.014232  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:48.014253  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:48.014388  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:48.014508  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:48.115278  112272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0417 19:24:48.116125  112272 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
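
The cat /version.json call above returns a small JSON document describing the ISO build. A sketch of decoding it into a struct with encoding/json, using the exact payload from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // versionInfo mirrors the fields visible in the /version.json output above.
    type versionInfo struct {
        ISOVersion      string `json:"iso_version"`
        KicbaseVersion  string `json:"kicbase_version"`
        MinikubeVersion string `json:"minikube_version"`
        Commit          string `json:"commit"`
    }

    func main() {
        raw := []byte(`{"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}`)
        var v versionInfo
        if err := json.Unmarshal(raw, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("ISO %s built for minikube %s (commit %s)\n", v.ISOVersion, v.MinikubeVersion, v.Commit)
    }
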
	I0417 19:24:48.116292  112272 ssh_runner.go:195] Run: systemctl --version
	I0417 19:24:48.122579  112272 command_runner.go:130] > systemd 252 (252)
	I0417 19:24:48.122644  112272 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0417 19:24:48.122712  112272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:24:48.290301  112272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0417 19:24:48.298812  112272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0417 19:24:48.298864  112272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:24:48.298911  112272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:24:48.309592  112272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0417 19:24:48.309617  112272 start.go:494] detecting cgroup driver to use...
	I0417 19:24:48.309699  112272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:24:48.325938  112272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:24:48.340430  112272 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:24:48.340495  112272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:24:48.354181  112272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:24:48.368319  112272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:24:48.514780  112272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:24:48.658227  112272 docker.go:233] disabling docker service ...
	I0417 19:24:48.658316  112272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:24:48.674994  112272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:24:48.689366  112272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:24:48.838279  112272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:24:48.993002  112272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:24:49.007941  112272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:24:49.027551  112272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0417 19:24:49.027612  112272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:24:49.027674  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.039251  112272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:24:49.039339  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.050742  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.061934  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.074547  112272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:24:49.087714  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.100297  112272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.112513  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.125177  112272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:24:49.136481  112272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0417 19:24:49.136748  112272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
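
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, adjusts conmon_cgroup and default_sysctls, and finally enables IP forwarding. A hedged sketch of the first two substitutions done in Go on an in-memory copy of the file (the sample config contents are assumptions):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf (contents assumed).
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"

    [crio.runtime]
    cgroup_manager = "systemd"
    `

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
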
	I0417 19:24:49.148306  112272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:24:49.296369  112272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:24:50.780113  112272 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.483693227s)
	I0417 19:24:50.780166  112272 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:24:50.780227  112272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:24:50.785569  112272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0417 19:24:50.785591  112272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0417 19:24:50.785600  112272 command_runner.go:130] > Device: 0,22	Inode: 1330        Links: 1
	I0417 19:24:50.785623  112272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0417 19:24:50.785636  112272 command_runner.go:130] > Access: 2024-04-17 19:24:50.655242829 +0000
	I0417 19:24:50.785644  112272 command_runner.go:130] > Modify: 2024-04-17 19:24:50.649242696 +0000
	I0417 19:24:50.785652  112272 command_runner.go:130] > Change: 2024-04-17 19:24:50.649242696 +0000
	I0417 19:24:50.785662  112272 command_runner.go:130] >  Birth: -
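
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and confirms it with stat. A small sketch of that wait loop, polling until the socket exists and accepts a connection (the 500ms poll interval is an assumption):

    package main

    import (
        "fmt"
        "log"
        "net"
        "os"
        "time"
    )

    // waitForSocket polls a unix socket path until it both exists and accepts
    // connections, roughly what "Will wait 60s for socket path" implies.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                conn, err := net.DialTimeout("unix", path, time.Second)
                if err == nil {
                    conn.Close()
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("crio.sock is ready")
    }
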
	I0417 19:24:50.785754  112272 start.go:562] Will wait 60s for crictl version
	I0417 19:24:50.785812  112272 ssh_runner.go:195] Run: which crictl
	I0417 19:24:50.789820  112272 command_runner.go:130] > /usr/bin/crictl
	I0417 19:24:50.789991  112272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:24:50.835756  112272 command_runner.go:130] > Version:  0.1.0
	I0417 19:24:50.835781  112272 command_runner.go:130] > RuntimeName:  cri-o
	I0417 19:24:50.835786  112272 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0417 19:24:50.835791  112272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0417 19:24:50.837170  112272 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:24:50.837277  112272 ssh_runner.go:195] Run: crio --version
	I0417 19:24:50.873531  112272 command_runner.go:130] > crio version 1.29.1
	I0417 19:24:50.873557  112272 command_runner.go:130] > Version:        1.29.1
	I0417 19:24:50.873563  112272 command_runner.go:130] > GitCommit:      unknown
	I0417 19:24:50.873567  112272 command_runner.go:130] > GitCommitDate:  unknown
	I0417 19:24:50.873571  112272 command_runner.go:130] > GitTreeState:   clean
	I0417 19:24:50.873577  112272 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0417 19:24:50.873581  112272 command_runner.go:130] > GoVersion:      go1.21.6
	I0417 19:24:50.873585  112272 command_runner.go:130] > Compiler:       gc
	I0417 19:24:50.873590  112272 command_runner.go:130] > Platform:       linux/amd64
	I0417 19:24:50.873594  112272 command_runner.go:130] > Linkmode:       dynamic
	I0417 19:24:50.873598  112272 command_runner.go:130] > BuildTags:      
	I0417 19:24:50.873603  112272 command_runner.go:130] >   containers_image_ostree_stub
	I0417 19:24:50.873606  112272 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0417 19:24:50.873610  112272 command_runner.go:130] >   btrfs_noversion
	I0417 19:24:50.873614  112272 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0417 19:24:50.873618  112272 command_runner.go:130] >   libdm_no_deferred_remove
	I0417 19:24:50.873621  112272 command_runner.go:130] >   seccomp
	I0417 19:24:50.873624  112272 command_runner.go:130] > LDFlags:          unknown
	I0417 19:24:50.873628  112272 command_runner.go:130] > SeccompEnabled:   true
	I0417 19:24:50.873632  112272 command_runner.go:130] > AppArmorEnabled:  false
	I0417 19:24:50.873697  112272 ssh_runner.go:195] Run: crio --version
	I0417 19:24:50.902409  112272 command_runner.go:130] > crio version 1.29.1
	I0417 19:24:50.902465  112272 command_runner.go:130] > Version:        1.29.1
	I0417 19:24:50.902475  112272 command_runner.go:130] > GitCommit:      unknown
	I0417 19:24:50.902482  112272 command_runner.go:130] > GitCommitDate:  unknown
	I0417 19:24:50.902488  112272 command_runner.go:130] > GitTreeState:   clean
	I0417 19:24:50.902497  112272 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0417 19:24:50.902503  112272 command_runner.go:130] > GoVersion:      go1.21.6
	I0417 19:24:50.902509  112272 command_runner.go:130] > Compiler:       gc
	I0417 19:24:50.902513  112272 command_runner.go:130] > Platform:       linux/amd64
	I0417 19:24:50.902517  112272 command_runner.go:130] > Linkmode:       dynamic
	I0417 19:24:50.902521  112272 command_runner.go:130] > BuildTags:      
	I0417 19:24:50.902525  112272 command_runner.go:130] >   containers_image_ostree_stub
	I0417 19:24:50.902530  112272 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0417 19:24:50.902534  112272 command_runner.go:130] >   btrfs_noversion
	I0417 19:24:50.902539  112272 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0417 19:24:50.902543  112272 command_runner.go:130] >   libdm_no_deferred_remove
	I0417 19:24:50.902547  112272 command_runner.go:130] >   seccomp
	I0417 19:24:50.902551  112272 command_runner.go:130] > LDFlags:          unknown
	I0417 19:24:50.902555  112272 command_runner.go:130] > SeccompEnabled:   true
	I0417 19:24:50.902559  112272 command_runner.go:130] > AppArmorEnabled:  false
	I0417 19:24:50.908320  112272 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 19:24:50.909929  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:24:50.912518  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:50.912847  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:50.912883  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:50.913077  112272 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:24:50.924057  112272 command_runner.go:130] > 192.168.39.1	host.minikube.internal
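
The grep above verifies that /etc/hosts in the guest already maps host.minikube.internal to the gateway 192.168.39.1. A sketch of the same check-then-append logic done directly in Go (the real flow shells out over SSH with sudo; the /tmp path here is only so the example can run locally without privileges):

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry appends "ip<TAB>host" to the given hosts file if no line
    // mentioning the hostname is present yet.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.Contains(line, host) {
                return nil // already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
        return err
    }

    func main() {
        if err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
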
	I0417 19:24:50.924342  112272 kubeadm.go:877] updating cluster {Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:24:50.924514  112272 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:24:50.924581  112272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:24:50.961109  112272 command_runner.go:130] > {
	I0417 19:24:50.961132  112272 command_runner.go:130] >   "images": [
	I0417 19:24:50.961137  112272 command_runner.go:130] >     {
	I0417 19:24:50.961144  112272 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0417 19:24:50.961149  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961154  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0417 19:24:50.961158  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961166  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961174  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0417 19:24:50.961181  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0417 19:24:50.961185  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961189  112272 command_runner.go:130] >       "size": "65291810",
	I0417 19:24:50.961193  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961197  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961207  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961212  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961215  112272 command_runner.go:130] >     },
	I0417 19:24:50.961219  112272 command_runner.go:130] >     {
	I0417 19:24:50.961230  112272 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0417 19:24:50.961238  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961243  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0417 19:24:50.961247  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961250  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961257  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0417 19:24:50.961264  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0417 19:24:50.961270  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961274  112272 command_runner.go:130] >       "size": "1363676",
	I0417 19:24:50.961278  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961284  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961290  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961294  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961297  112272 command_runner.go:130] >     },
	I0417 19:24:50.961301  112272 command_runner.go:130] >     {
	I0417 19:24:50.961309  112272 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0417 19:24:50.961314  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961338  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0417 19:24:50.961342  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961345  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961352  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0417 19:24:50.961362  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0417 19:24:50.961368  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961372  112272 command_runner.go:130] >       "size": "31470524",
	I0417 19:24:50.961376  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961389  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961396  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961400  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961403  112272 command_runner.go:130] >     },
	I0417 19:24:50.961406  112272 command_runner.go:130] >     {
	I0417 19:24:50.961412  112272 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0417 19:24:50.961418  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961427  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0417 19:24:50.961433  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961437  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961447  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0417 19:24:50.961463  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0417 19:24:50.961470  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961475  112272 command_runner.go:130] >       "size": "61245718",
	I0417 19:24:50.961482  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961486  112272 command_runner.go:130] >       "username": "nonroot",
	I0417 19:24:50.961490  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961495  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961500  112272 command_runner.go:130] >     },
	I0417 19:24:50.961504  112272 command_runner.go:130] >     {
	I0417 19:24:50.961512  112272 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0417 19:24:50.961516  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961522  112272 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0417 19:24:50.961525  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961529  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961537  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0417 19:24:50.961544  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0417 19:24:50.961548  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961552  112272 command_runner.go:130] >       "size": "150779692",
	I0417 19:24:50.961557  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961561  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961567  112272 command_runner.go:130] >       },
	I0417 19:24:50.961571  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961575  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961579  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961582  112272 command_runner.go:130] >     },
	I0417 19:24:50.961585  112272 command_runner.go:130] >     {
	I0417 19:24:50.961598  112272 command_runner.go:130] >       "id": "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1",
	I0417 19:24:50.961604  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961609  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0-rc.2"
	I0417 19:24:50.961615  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961619  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961627  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053",
	I0417 19:24:50.961640  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e"
	I0417 19:24:50.961646  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961650  112272 command_runner.go:130] >       "size": "117609952",
	I0417 19:24:50.961653  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961657  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961661  112272 command_runner.go:130] >       },
	I0417 19:24:50.961664  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961668  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961672  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961675  112272 command_runner.go:130] >     },
	I0417 19:24:50.961678  112272 command_runner.go:130] >     {
	I0417 19:24:50.961684  112272 command_runner.go:130] >       "id": "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b",
	I0417 19:24:50.961690  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961696  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"
	I0417 19:24:50.961701  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961705  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961731  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8",
	I0417 19:24:50.961747  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2"
	I0417 19:24:50.961750  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961754  112272 command_runner.go:130] >       "size": "112170310",
	I0417 19:24:50.961758  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961762  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961770  112272 command_runner.go:130] >       },
	I0417 19:24:50.961776  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961783  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961789  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961793  112272 command_runner.go:130] >     },
	I0417 19:24:50.961796  112272 command_runner.go:130] >     {
	I0417 19:24:50.961805  112272 command_runner.go:130] >       "id": "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e",
	I0417 19:24:50.961811  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961817  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0-rc.2"
	I0417 19:24:50.961821  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961825  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961849  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5",
	I0417 19:24:50.961858  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d"
	I0417 19:24:50.961862  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961870  112272 command_runner.go:130] >       "size": "85932953",
	I0417 19:24:50.961874  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961877  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961881  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961884  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961887  112272 command_runner.go:130] >     },
	I0417 19:24:50.961890  112272 command_runner.go:130] >     {
	I0417 19:24:50.961896  112272 command_runner.go:130] >       "id": "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6",
	I0417 19:24:50.961900  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961904  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0-rc.2"
	I0417 19:24:50.961907  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961911  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961918  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543",
	I0417 19:24:50.961925  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216"
	I0417 19:24:50.961928  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961931  112272 command_runner.go:130] >       "size": "63026500",
	I0417 19:24:50.961935  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961938  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961941  112272 command_runner.go:130] >       },
	I0417 19:24:50.961944  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961948  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961952  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961954  112272 command_runner.go:130] >     },
	I0417 19:24:50.961957  112272 command_runner.go:130] >     {
	I0417 19:24:50.961963  112272 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0417 19:24:50.961967  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961971  112272 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0417 19:24:50.961974  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961978  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961984  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0417 19:24:50.961991  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0417 19:24:50.961997  112272 command_runner.go:130] >       ],
	I0417 19:24:50.962001  112272 command_runner.go:130] >       "size": "750414",
	I0417 19:24:50.962004  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.962008  112272 command_runner.go:130] >         "value": "65535"
	I0417 19:24:50.962012  112272 command_runner.go:130] >       },
	I0417 19:24:50.962021  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.962027  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.962030  112272 command_runner.go:130] >       "pinned": true
	I0417 19:24:50.962036  112272 command_runner.go:130] >     }
	I0417 19:24:50.962042  112272 command_runner.go:130] >   ]
	I0417 19:24:50.962045  112272 command_runner.go:130] > }
	I0417 19:24:50.962799  112272 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:24:50.962814  112272 crio.go:433] Images already preloaded, skipping extraction
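
crio.go:514 concludes that every image needed for v1.30.0-rc.2 is already present, so the preload tarball is not re-extracted. A sketch of how that conclusion can be drawn from the crictl output: decode the JSON and tick off the expected repo tags (the required-image list below is read off the log itself, not taken from minikube's source; for brevity the command is run locally rather than over SSH):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // crictlImages matches the shape of `sudo crictl images --output json` above.
    type crictlImages struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            log.Fatal(err)
        }

        // Tags the preload is expected to contain, per the listing above.
        required := map[string]bool{
            "registry.k8s.io/kube-apiserver:v1.30.0-rc.2":          false,
            "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2": false,
            "registry.k8s.io/kube-scheduler:v1.30.0-rc.2":          false,
            "registry.k8s.io/kube-proxy:v1.30.0-rc.2":              false,
            "registry.k8s.io/etcd:3.5.12-0":                        false,
            "registry.k8s.io/coredns/coredns:v1.11.1":              false,
            "registry.k8s.io/pause:3.9":                            false,
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if _, ok := required[tag]; ok {
                    required[tag] = true
                }
            }
        }
        for tag, found := range required {
            fmt.Printf("%-55s preloaded=%v\n", tag, found)
        }
    }
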
	I0417 19:24:50.962861  112272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:24:50.996360  112272 command_runner.go:130] > {
	I0417 19:24:50.996385  112272 command_runner.go:130] >   "images": [
	I0417 19:24:50.996390  112272 command_runner.go:130] >     {
	I0417 19:24:50.996398  112272 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0417 19:24:50.996403  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996408  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0417 19:24:50.996411  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996416  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996424  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0417 19:24:50.996430  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0417 19:24:50.996434  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996438  112272 command_runner.go:130] >       "size": "65291810",
	I0417 19:24:50.996441  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996445  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996451  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996455  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996458  112272 command_runner.go:130] >     },
	I0417 19:24:50.996461  112272 command_runner.go:130] >     {
	I0417 19:24:50.996467  112272 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0417 19:24:50.996472  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996481  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0417 19:24:50.996487  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996491  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996498  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0417 19:24:50.996505  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0417 19:24:50.996510  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996514  112272 command_runner.go:130] >       "size": "1363676",
	I0417 19:24:50.996518  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996531  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996541  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996545  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996548  112272 command_runner.go:130] >     },
	I0417 19:24:50.996557  112272 command_runner.go:130] >     {
	I0417 19:24:50.996566  112272 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0417 19:24:50.996594  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996601  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0417 19:24:50.996604  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996608  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996616  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0417 19:24:50.996625  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0417 19:24:50.996629  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996633  112272 command_runner.go:130] >       "size": "31470524",
	I0417 19:24:50.996637  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996641  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996647  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996655  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996659  112272 command_runner.go:130] >     },
	I0417 19:24:50.996662  112272 command_runner.go:130] >     {
	I0417 19:24:50.996668  112272 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0417 19:24:50.996674  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996678  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0417 19:24:50.996682  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996686  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996693  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0417 19:24:50.996710  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0417 19:24:50.996717  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996721  112272 command_runner.go:130] >       "size": "61245718",
	I0417 19:24:50.996724  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996729  112272 command_runner.go:130] >       "username": "nonroot",
	I0417 19:24:50.996735  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996739  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996742  112272 command_runner.go:130] >     },
	I0417 19:24:50.996746  112272 command_runner.go:130] >     {
	I0417 19:24:50.996751  112272 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0417 19:24:50.996761  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996778  112272 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0417 19:24:50.996782  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996786  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996801  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0417 19:24:50.996813  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0417 19:24:50.996821  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996828  112272 command_runner.go:130] >       "size": "150779692",
	I0417 19:24:50.996836  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.996842  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.996849  112272 command_runner.go:130] >       },
	I0417 19:24:50.996853  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996860  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996864  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996870  112272 command_runner.go:130] >     },
	I0417 19:24:50.996873  112272 command_runner.go:130] >     {
	I0417 19:24:50.996879  112272 command_runner.go:130] >       "id": "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1",
	I0417 19:24:50.996885  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996890  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0-rc.2"
	I0417 19:24:50.996896  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996900  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996907  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053",
	I0417 19:24:50.996916  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e"
	I0417 19:24:50.996919  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996923  112272 command_runner.go:130] >       "size": "117609952",
	I0417 19:24:50.996928  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.996931  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.996935  112272 command_runner.go:130] >       },
	I0417 19:24:50.996938  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996942  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996946  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996949  112272 command_runner.go:130] >     },
	I0417 19:24:50.996957  112272 command_runner.go:130] >     {
	I0417 19:24:50.996965  112272 command_runner.go:130] >       "id": "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b",
	I0417 19:24:50.996969  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996977  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"
	I0417 19:24:50.996987  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996997  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997005  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8",
	I0417 19:24:50.997015  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2"
	I0417 19:24:50.997022  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997025  112272 command_runner.go:130] >       "size": "112170310",
	I0417 19:24:50.997029  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997033  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.997037  112272 command_runner.go:130] >       },
	I0417 19:24:50.997040  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997044  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997048  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997051  112272 command_runner.go:130] >     },
	I0417 19:24:50.997055  112272 command_runner.go:130] >     {
	I0417 19:24:50.997061  112272 command_runner.go:130] >       "id": "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e",
	I0417 19:24:50.997067  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997072  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0-rc.2"
	I0417 19:24:50.997077  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997081  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997101  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5",
	I0417 19:24:50.997115  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d"
	I0417 19:24:50.997118  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997121  112272 command_runner.go:130] >       "size": "85932953",
	I0417 19:24:50.997125  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.997128  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997132  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997136  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997139  112272 command_runner.go:130] >     },
	I0417 19:24:50.997142  112272 command_runner.go:130] >     {
	I0417 19:24:50.997150  112272 command_runner.go:130] >       "id": "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6",
	I0417 19:24:50.997156  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997161  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0-rc.2"
	I0417 19:24:50.997167  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997171  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997179  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543",
	I0417 19:24:50.997186  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216"
	I0417 19:24:50.997197  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997203  112272 command_runner.go:130] >       "size": "63026500",
	I0417 19:24:50.997207  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997211  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.997214  112272 command_runner.go:130] >       },
	I0417 19:24:50.997217  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997221  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997225  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997228  112272 command_runner.go:130] >     },
	I0417 19:24:50.997232  112272 command_runner.go:130] >     {
	I0417 19:24:50.997237  112272 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0417 19:24:50.997244  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997248  112272 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0417 19:24:50.997251  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997255  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997264  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0417 19:24:50.997273  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0417 19:24:50.997276  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997280  112272 command_runner.go:130] >       "size": "750414",
	I0417 19:24:50.997283  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997288  112272 command_runner.go:130] >         "value": "65535"
	I0417 19:24:50.997294  112272 command_runner.go:130] >       },
	I0417 19:24:50.997298  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997301  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997305  112272 command_runner.go:130] >       "pinned": true
	I0417 19:24:50.997309  112272 command_runner.go:130] >     }
	I0417 19:24:50.997312  112272 command_runner.go:130] >   ]
	I0417 19:24:50.997315  112272 command_runner.go:130] > }
	I0417 19:24:50.997978  112272 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:24:50.998014  112272 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:24:50.998036  112272 kubeadm.go:928] updating node { 192.168.39.106 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:24:50.998184  112272 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-990943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:24:50.998278  112272 ssh_runner.go:195] Run: crio config
	I0417 19:24:51.041255  112272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0417 19:24:51.041289  112272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0417 19:24:51.041299  112272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0417 19:24:51.041305  112272 command_runner.go:130] > #
	I0417 19:24:51.041315  112272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0417 19:24:51.041323  112272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0417 19:24:51.041331  112272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0417 19:24:51.041351  112272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0417 19:24:51.041358  112272 command_runner.go:130] > # reload'.
	I0417 19:24:51.041376  112272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0417 19:24:51.041390  112272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0417 19:24:51.041401  112272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0417 19:24:51.041412  112272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0417 19:24:51.041429  112272 command_runner.go:130] > [crio]
	I0417 19:24:51.041440  112272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0417 19:24:51.041451  112272 command_runner.go:130] > # containers images, in this directory.
	I0417 19:24:51.041459  112272 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0417 19:24:51.041476  112272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0417 19:24:51.041487  112272 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0417 19:24:51.041501  112272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0417 19:24:51.041511  112272 command_runner.go:130] > # imagestore = ""
	I0417 19:24:51.041542  112272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0417 19:24:51.041556  112272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0417 19:24:51.041577  112272 command_runner.go:130] > storage_driver = "overlay"
	I0417 19:24:51.041590  112272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0417 19:24:51.041603  112272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0417 19:24:51.041613  112272 command_runner.go:130] > storage_option = [
	I0417 19:24:51.041625  112272 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0417 19:24:51.041633  112272 command_runner.go:130] > ]
	I0417 19:24:51.041645  112272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0417 19:24:51.041658  112272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0417 19:24:51.041667  112272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0417 19:24:51.041680  112272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0417 19:24:51.041693  112272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0417 19:24:51.041704  112272 command_runner.go:130] > # always happen on a node reboot
	I0417 19:24:51.041713  112272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0417 19:24:51.041735  112272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0417 19:24:51.041749  112272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0417 19:24:51.041760  112272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0417 19:24:51.041771  112272 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0417 19:24:51.041789  112272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0417 19:24:51.041806  112272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0417 19:24:51.041822  112272 command_runner.go:130] > # internal_wipe = true
	I0417 19:24:51.041838  112272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0417 19:24:51.041851  112272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0417 19:24:51.041861  112272 command_runner.go:130] > # internal_repair = false
	I0417 19:24:51.041877  112272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0417 19:24:51.041891  112272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0417 19:24:51.041903  112272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0417 19:24:51.041916  112272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0417 19:24:51.041929  112272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0417 19:24:51.041936  112272 command_runner.go:130] > [crio.api]
	I0417 19:24:51.041948  112272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0417 19:24:51.041959  112272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0417 19:24:51.041969  112272 command_runner.go:130] > # IP address on which the stream server will listen.
	I0417 19:24:51.041980  112272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0417 19:24:51.041991  112272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0417 19:24:51.042011  112272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0417 19:24:51.042018  112272 command_runner.go:130] > # stream_port = "0"
	I0417 19:24:51.042026  112272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0417 19:24:51.042032  112272 command_runner.go:130] > # stream_enable_tls = false
	I0417 19:24:51.042039  112272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0417 19:24:51.042044  112272 command_runner.go:130] > # stream_idle_timeout = ""
	I0417 19:24:51.042050  112272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0417 19:24:51.042056  112272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0417 19:24:51.042059  112272 command_runner.go:130] > # minutes.
	I0417 19:24:51.042063  112272 command_runner.go:130] > # stream_tls_cert = ""
	I0417 19:24:51.042069  112272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0417 19:24:51.042077  112272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0417 19:24:51.042081  112272 command_runner.go:130] > # stream_tls_key = ""
	I0417 19:24:51.042090  112272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0417 19:24:51.042095  112272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0417 19:24:51.042116  112272 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0417 19:24:51.042127  112272 command_runner.go:130] > # stream_tls_ca = ""
	I0417 19:24:51.042139  112272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0417 19:24:51.042150  112272 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0417 19:24:51.042169  112272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0417 19:24:51.042180  112272 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0417 19:24:51.042189  112272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0417 19:24:51.042200  112272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0417 19:24:51.042208  112272 command_runner.go:130] > [crio.runtime]
	I0417 19:24:51.042218  112272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0417 19:24:51.042230  112272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0417 19:24:51.042240  112272 command_runner.go:130] > # "nofile=1024:2048"
	I0417 19:24:51.042250  112272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0417 19:24:51.042259  112272 command_runner.go:130] > # default_ulimits = [
	I0417 19:24:51.042264  112272 command_runner.go:130] > # ]
	I0417 19:24:51.042270  112272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0417 19:24:51.042275  112272 command_runner.go:130] > # no_pivot = false
	I0417 19:24:51.042280  112272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0417 19:24:51.042289  112272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0417 19:24:51.042293  112272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0417 19:24:51.042303  112272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0417 19:24:51.042315  112272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0417 19:24:51.042325  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0417 19:24:51.042329  112272 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0417 19:24:51.042336  112272 command_runner.go:130] > # Cgroup setting for conmon
	I0417 19:24:51.042342  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0417 19:24:51.042348  112272 command_runner.go:130] > conmon_cgroup = "pod"
	I0417 19:24:51.042355  112272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0417 19:24:51.042362  112272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0417 19:24:51.042368  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0417 19:24:51.042374  112272 command_runner.go:130] > conmon_env = [
	I0417 19:24:51.042383  112272 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0417 19:24:51.042392  112272 command_runner.go:130] > ]
	I0417 19:24:51.042400  112272 command_runner.go:130] > # Additional environment variables to set for all the
	I0417 19:24:51.042412  112272 command_runner.go:130] > # containers. These are overridden if set in the
	I0417 19:24:51.042424  112272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0417 19:24:51.042434  112272 command_runner.go:130] > # default_env = [
	I0417 19:24:51.042440  112272 command_runner.go:130] > # ]
	I0417 19:24:51.042452  112272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0417 19:24:51.042466  112272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0417 19:24:51.042476  112272 command_runner.go:130] > # selinux = false
	I0417 19:24:51.042487  112272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0417 19:24:51.042500  112272 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0417 19:24:51.042512  112272 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0417 19:24:51.042522  112272 command_runner.go:130] > # seccomp_profile = ""
	I0417 19:24:51.042529  112272 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0417 19:24:51.042542  112272 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0417 19:24:51.042555  112272 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0417 19:24:51.042566  112272 command_runner.go:130] > # which might increase security.
	I0417 19:24:51.042580  112272 command_runner.go:130] > # This option is currently deprecated,
	I0417 19:24:51.042596  112272 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0417 19:24:51.042607  112272 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0417 19:24:51.042621  112272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0417 19:24:51.042636  112272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0417 19:24:51.042648  112272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0417 19:24:51.042662  112272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0417 19:24:51.042683  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.042701  112272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0417 19:24:51.042714  112272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0417 19:24:51.042724  112272 command_runner.go:130] > # the cgroup blockio controller.
	I0417 19:24:51.042729  112272 command_runner.go:130] > # blockio_config_file = ""
	I0417 19:24:51.042744  112272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0417 19:24:51.042754  112272 command_runner.go:130] > # blockio parameters.
	I0417 19:24:51.042759  112272 command_runner.go:130] > # blockio_reload = false
	I0417 19:24:51.042769  112272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0417 19:24:51.042779  112272 command_runner.go:130] > # irqbalance daemon.
	I0417 19:24:51.042789  112272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0417 19:24:51.042803  112272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0417 19:24:51.042818  112272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0417 19:24:51.042832  112272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0417 19:24:51.042845  112272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0417 19:24:51.042862  112272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0417 19:24:51.042876  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.042887  112272 command_runner.go:130] > # rdt_config_file = ""
	I0417 19:24:51.042898  112272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0417 19:24:51.042908  112272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0417 19:24:51.042947  112272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0417 19:24:51.042957  112272 command_runner.go:130] > # separate_pull_cgroup = ""
	I0417 19:24:51.042967  112272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0417 19:24:51.042980  112272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0417 19:24:51.042985  112272 command_runner.go:130] > # will be added.
	I0417 19:24:51.042991  112272 command_runner.go:130] > # default_capabilities = [
	I0417 19:24:51.043000  112272 command_runner.go:130] > # 	"CHOWN",
	I0417 19:24:51.043006  112272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0417 19:24:51.043016  112272 command_runner.go:130] > # 	"FSETID",
	I0417 19:24:51.043023  112272 command_runner.go:130] > # 	"FOWNER",
	I0417 19:24:51.043032  112272 command_runner.go:130] > # 	"SETGID",
	I0417 19:24:51.043038  112272 command_runner.go:130] > # 	"SETUID",
	I0417 19:24:51.043047  112272 command_runner.go:130] > # 	"SETPCAP",
	I0417 19:24:51.043054  112272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0417 19:24:51.043062  112272 command_runner.go:130] > # 	"KILL",
	I0417 19:24:51.043068  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043080  112272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0417 19:24:51.043097  112272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0417 19:24:51.043107  112272 command_runner.go:130] > # add_inheritable_capabilities = false
	I0417 19:24:51.043117  112272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0417 19:24:51.043130  112272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0417 19:24:51.043146  112272 command_runner.go:130] > default_sysctls = [
	I0417 19:24:51.043160  112272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0417 19:24:51.043167  112272 command_runner.go:130] > ]
	I0417 19:24:51.043175  112272 command_runner.go:130] > # List of devices on the host that a
	I0417 19:24:51.043185  112272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0417 19:24:51.043189  112272 command_runner.go:130] > # allowed_devices = [
	I0417 19:24:51.043193  112272 command_runner.go:130] > # 	"/dev/fuse",
	I0417 19:24:51.043197  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043203  112272 command_runner.go:130] > # List of additional devices. specified as
	I0417 19:24:51.043217  112272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0417 19:24:51.043229  112272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0417 19:24:51.043239  112272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0417 19:24:51.043249  112272 command_runner.go:130] > # additional_devices = [
	I0417 19:24:51.043255  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043266  112272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0417 19:24:51.043276  112272 command_runner.go:130] > # cdi_spec_dirs = [
	I0417 19:24:51.043282  112272 command_runner.go:130] > # 	"/etc/cdi",
	I0417 19:24:51.043291  112272 command_runner.go:130] > # 	"/var/run/cdi",
	I0417 19:24:51.043296  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043310  112272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0417 19:24:51.043322  112272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0417 19:24:51.043332  112272 command_runner.go:130] > # Defaults to false.
	I0417 19:24:51.043340  112272 command_runner.go:130] > # device_ownership_from_security_context = false
	I0417 19:24:51.043353  112272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0417 19:24:51.043365  112272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0417 19:24:51.043374  112272 command_runner.go:130] > # hooks_dir = [
	I0417 19:24:51.043381  112272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0417 19:24:51.043388  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043394  112272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0417 19:24:51.043401  112272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0417 19:24:51.043408  112272 command_runner.go:130] > # its default mounts from the following two files:
	I0417 19:24:51.043416  112272 command_runner.go:130] > #
	I0417 19:24:51.043430  112272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0417 19:24:51.043439  112272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0417 19:24:51.043444  112272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0417 19:24:51.043449  112272 command_runner.go:130] > #
	I0417 19:24:51.043454  112272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0417 19:24:51.043464  112272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0417 19:24:51.043469  112272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0417 19:24:51.043475  112272 command_runner.go:130] > #      only add mounts it finds in this file.
	I0417 19:24:51.043478  112272 command_runner.go:130] > #
	I0417 19:24:51.043481  112272 command_runner.go:130] > # default_mounts_file = ""
	I0417 19:24:51.043489  112272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0417 19:24:51.043503  112272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0417 19:24:51.043509  112272 command_runner.go:130] > pids_limit = 1024
	I0417 19:24:51.043522  112272 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0417 19:24:51.043534  112272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0417 19:24:51.043548  112272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0417 19:24:51.043563  112272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0417 19:24:51.043596  112272 command_runner.go:130] > # log_size_max = -1
	I0417 19:24:51.043611  112272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0417 19:24:51.043620  112272 command_runner.go:130] > # log_to_journald = false
	I0417 19:24:51.043629  112272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0417 19:24:51.043641  112272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0417 19:24:51.043652  112272 command_runner.go:130] > # Path to directory for container attach sockets.
	I0417 19:24:51.043663  112272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0417 19:24:51.043671  112272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0417 19:24:51.043681  112272 command_runner.go:130] > # bind_mount_prefix = ""
	I0417 19:24:51.043688  112272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0417 19:24:51.043701  112272 command_runner.go:130] > # read_only = false
	I0417 19:24:51.043713  112272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0417 19:24:51.043727  112272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0417 19:24:51.043737  112272 command_runner.go:130] > # live configuration reload.
	I0417 19:24:51.043744  112272 command_runner.go:130] > # log_level = "info"
	I0417 19:24:51.043755  112272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0417 19:24:51.043766  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.043775  112272 command_runner.go:130] > # log_filter = ""
	I0417 19:24:51.043785  112272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0417 19:24:51.043805  112272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0417 19:24:51.043815  112272 command_runner.go:130] > # separated by comma.
	I0417 19:24:51.043828  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043838  112272 command_runner.go:130] > # uid_mappings = ""
	I0417 19:24:51.043846  112272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0417 19:24:51.043854  112272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0417 19:24:51.043860  112272 command_runner.go:130] > # separated by comma.
	I0417 19:24:51.043872  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043882  112272 command_runner.go:130] > # gid_mappings = ""
	I0417 19:24:51.043892  112272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0417 19:24:51.043905  112272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0417 19:24:51.043915  112272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0417 19:24:51.043933  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043949  112272 command_runner.go:130] > # minimum_mappable_uid = -1
	I0417 19:24:51.043959  112272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0417 19:24:51.043966  112272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0417 19:24:51.043978  112272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0417 19:24:51.043991  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.044001  112272 command_runner.go:130] > # minimum_mappable_gid = -1
	I0417 19:24:51.044010  112272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0417 19:24:51.044023  112272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0417 19:24:51.044032  112272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0417 19:24:51.044040  112272 command_runner.go:130] > # ctr_stop_timeout = 30
	I0417 19:24:51.044046  112272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0417 19:24:51.044055  112272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0417 19:24:51.044063  112272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0417 19:24:51.044074  112272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0417 19:24:51.044081  112272 command_runner.go:130] > drop_infra_ctr = false
	I0417 19:24:51.044095  112272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0417 19:24:51.044107  112272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0417 19:24:51.044118  112272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0417 19:24:51.044127  112272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0417 19:24:51.044139  112272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0417 19:24:51.044149  112272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0417 19:24:51.044154  112272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0417 19:24:51.044166  112272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0417 19:24:51.044183  112272 command_runner.go:130] > # shared_cpuset = ""
	I0417 19:24:51.044196  112272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0417 19:24:51.044211  112272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0417 19:24:51.044221  112272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0417 19:24:51.044231  112272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0417 19:24:51.044241  112272 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0417 19:24:51.044248  112272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0417 19:24:51.044258  112272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0417 19:24:51.044265  112272 command_runner.go:130] > # enable_criu_support = false
	I0417 19:24:51.044276  112272 command_runner.go:130] > # Enable/disable the generation of the container,
	I0417 19:24:51.044285  112272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0417 19:24:51.044296  112272 command_runner.go:130] > # enable_pod_events = false
	I0417 19:24:51.044310  112272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0417 19:24:51.044323  112272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0417 19:24:51.044334  112272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0417 19:24:51.044340  112272 command_runner.go:130] > # default_runtime = "runc"
	I0417 19:24:51.044348  112272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0417 19:24:51.044358  112272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0417 19:24:51.044375  112272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0417 19:24:51.044386  112272 command_runner.go:130] > # creation as a file is not desired either.
	I0417 19:24:51.044403  112272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0417 19:24:51.044413  112272 command_runner.go:130] > # the hostname is being managed dynamically.
	I0417 19:24:51.044421  112272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0417 19:24:51.044429  112272 command_runner.go:130] > # ]
	I0417 19:24:51.044435  112272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0417 19:24:51.044447  112272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0417 19:24:51.044460  112272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0417 19:24:51.044472  112272 command_runner.go:130] > # Each entry in the table should follow the format:
	I0417 19:24:51.044481  112272 command_runner.go:130] > #
	I0417 19:24:51.044491  112272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0417 19:24:51.044502  112272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0417 19:24:51.044564  112272 command_runner.go:130] > # runtime_type = "oci"
	I0417 19:24:51.044582  112272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0417 19:24:51.044594  112272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0417 19:24:51.044606  112272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0417 19:24:51.044616  112272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0417 19:24:51.044634  112272 command_runner.go:130] > # monitor_env = []
	I0417 19:24:51.044645  112272 command_runner.go:130] > # privileged_without_host_devices = false
	I0417 19:24:51.044652  112272 command_runner.go:130] > # allowed_annotations = []
	I0417 19:24:51.044658  112272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0417 19:24:51.044667  112272 command_runner.go:130] > # Where:
	I0417 19:24:51.044678  112272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0417 19:24:51.044691  112272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0417 19:24:51.044704  112272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0417 19:24:51.044719  112272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0417 19:24:51.044726  112272 command_runner.go:130] > #   in $PATH.
	I0417 19:24:51.044735  112272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0417 19:24:51.044745  112272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0417 19:24:51.044754  112272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0417 19:24:51.044765  112272 command_runner.go:130] > #   state.
	I0417 19:24:51.044790  112272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0417 19:24:51.044803  112272 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0417 19:24:51.044816  112272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0417 19:24:51.044829  112272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0417 19:24:51.044842  112272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0417 19:24:51.044854  112272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0417 19:24:51.044865  112272 command_runner.go:130] > #   The currently recognized values are:
	I0417 19:24:51.044877  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0417 19:24:51.044888  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0417 19:24:51.044896  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0417 19:24:51.044904  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0417 19:24:51.044913  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0417 19:24:51.044922  112272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0417 19:24:51.044930  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0417 19:24:51.044938  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0417 19:24:51.044944  112272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0417 19:24:51.044952  112272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0417 19:24:51.044957  112272 command_runner.go:130] > #   deprecated option "conmon".
	I0417 19:24:51.044964  112272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0417 19:24:51.044971  112272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0417 19:24:51.044977  112272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0417 19:24:51.044985  112272 command_runner.go:130] > #   should be moved to the container's cgroup
	I0417 19:24:51.044999  112272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0417 19:24:51.045007  112272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0417 19:24:51.045013  112272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0417 19:24:51.045021  112272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0417 19:24:51.045024  112272 command_runner.go:130] > #
	I0417 19:24:51.045028  112272 command_runner.go:130] > # Using the seccomp notifier feature:
	I0417 19:24:51.045033  112272 command_runner.go:130] > #
	I0417 19:24:51.045040  112272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0417 19:24:51.045048  112272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0417 19:24:51.045054  112272 command_runner.go:130] > #
	I0417 19:24:51.045059  112272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0417 19:24:51.045067  112272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0417 19:24:51.045072  112272 command_runner.go:130] > #
	I0417 19:24:51.045078  112272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0417 19:24:51.045083  112272 command_runner.go:130] > # feature.
	I0417 19:24:51.045086  112272 command_runner.go:130] > #
	I0417 19:24:51.045094  112272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0417 19:24:51.045102  112272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0417 19:24:51.045111  112272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0417 19:24:51.045117  112272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0417 19:24:51.045124  112272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0417 19:24:51.045129  112272 command_runner.go:130] > #
	I0417 19:24:51.045135  112272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0417 19:24:51.045141  112272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0417 19:24:51.045146  112272 command_runner.go:130] > #
	I0417 19:24:51.045151  112272 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0417 19:24:51.045157  112272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0417 19:24:51.045162  112272 command_runner.go:130] > #
	I0417 19:24:51.045168  112272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0417 19:24:51.045176  112272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0417 19:24:51.045181  112272 command_runner.go:130] > # limitation.
	I0417 19:24:51.045185  112272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0417 19:24:51.045189  112272 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0417 19:24:51.045193  112272 command_runner.go:130] > runtime_type = "oci"
	I0417 19:24:51.045200  112272 command_runner.go:130] > runtime_root = "/run/runc"
	I0417 19:24:51.045210  112272 command_runner.go:130] > runtime_config_path = ""
	I0417 19:24:51.045222  112272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0417 19:24:51.045228  112272 command_runner.go:130] > monitor_cgroup = "pod"
	I0417 19:24:51.045233  112272 command_runner.go:130] > monitor_exec_cgroup = ""
	I0417 19:24:51.045239  112272 command_runner.go:130] > monitor_env = [
	I0417 19:24:51.045244  112272 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0417 19:24:51.045249  112272 command_runner.go:130] > ]
	I0417 19:24:51.045254  112272 command_runner.go:130] > privileged_without_host_devices = false
	I0417 19:24:51.045261  112272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0417 19:24:51.045267  112272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0417 19:24:51.045275  112272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0417 19:24:51.045282  112272 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0417 19:24:51.045292  112272 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0417 19:24:51.045299  112272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0417 19:24:51.045310  112272 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0417 19:24:51.045319  112272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0417 19:24:51.045325  112272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0417 19:24:51.045333  112272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0417 19:24:51.045337  112272 command_runner.go:130] > # Example:
	I0417 19:24:51.045341  112272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0417 19:24:51.045349  112272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0417 19:24:51.045353  112272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0417 19:24:51.045360  112272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0417 19:24:51.045364  112272 command_runner.go:130] > # cpuset = 0
	I0417 19:24:51.045368  112272 command_runner.go:130] > # cpushares = "0-1"
	I0417 19:24:51.045371  112272 command_runner.go:130] > # Where:
	I0417 19:24:51.045375  112272 command_runner.go:130] > # The workload name is workload-type.
	I0417 19:24:51.045384  112272 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0417 19:24:51.045390  112272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0417 19:24:51.045397  112272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0417 19:24:51.045404  112272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0417 19:24:51.045412  112272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0417 19:24:51.045419  112272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0417 19:24:51.045426  112272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0417 19:24:51.045431  112272 command_runner.go:130] > # Default value is set to true
	I0417 19:24:51.045437  112272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0417 19:24:51.045443  112272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0417 19:24:51.045454  112272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0417 19:24:51.045460  112272 command_runner.go:130] > # Default value is set to 'false'
	I0417 19:24:51.045465  112272 command_runner.go:130] > # disable_hostport_mapping = false
	I0417 19:24:51.045474  112272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0417 19:24:51.045484  112272 command_runner.go:130] > #
	I0417 19:24:51.045489  112272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0417 19:24:51.045495  112272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0417 19:24:51.045500  112272 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0417 19:24:51.045505  112272 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0417 19:24:51.045511  112272 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0417 19:24:51.045514  112272 command_runner.go:130] > [crio.image]
	I0417 19:24:51.045519  112272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0417 19:24:51.045523  112272 command_runner.go:130] > # default_transport = "docker://"
	I0417 19:24:51.045529  112272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0417 19:24:51.045534  112272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0417 19:24:51.045537  112272 command_runner.go:130] > # global_auth_file = ""
	I0417 19:24:51.045542  112272 command_runner.go:130] > # The image used to instantiate infra containers.
	I0417 19:24:51.045546  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.045551  112272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0417 19:24:51.045557  112272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0417 19:24:51.045561  112272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0417 19:24:51.045566  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.045576  112272 command_runner.go:130] > # pause_image_auth_file = ""
	I0417 19:24:51.045581  112272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0417 19:24:51.045587  112272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0417 19:24:51.045593  112272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0417 19:24:51.045598  112272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0417 19:24:51.045601  112272 command_runner.go:130] > # pause_command = "/pause"
	I0417 19:24:51.045607  112272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0417 19:24:51.045612  112272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0417 19:24:51.045617  112272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0417 19:24:51.045624  112272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0417 19:24:51.045630  112272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0417 19:24:51.045635  112272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0417 19:24:51.045638  112272 command_runner.go:130] > # pinned_images = [
	I0417 19:24:51.045641  112272 command_runner.go:130] > # ]
	I0417 19:24:51.045651  112272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0417 19:24:51.045657  112272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0417 19:24:51.045663  112272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0417 19:24:51.045668  112272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0417 19:24:51.045675  112272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0417 19:24:51.045679  112272 command_runner.go:130] > # signature_policy = ""
	I0417 19:24:51.045684  112272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0417 19:24:51.045691  112272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0417 19:24:51.045700  112272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0417 19:24:51.045709  112272 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0417 19:24:51.045717  112272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0417 19:24:51.045722  112272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0417 19:24:51.045729  112272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0417 19:24:51.045742  112272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0417 19:24:51.045747  112272 command_runner.go:130] > # changing them here.
	I0417 19:24:51.045751  112272 command_runner.go:130] > # insecure_registries = [
	I0417 19:24:51.045757  112272 command_runner.go:130] > # ]
	I0417 19:24:51.045763  112272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0417 19:24:51.045770  112272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0417 19:24:51.045774  112272 command_runner.go:130] > # image_volumes = "mkdir"
	I0417 19:24:51.045781  112272 command_runner.go:130] > # Temporary directory to use for storing big files
	I0417 19:24:51.045785  112272 command_runner.go:130] > # big_files_temporary_dir = ""
	I0417 19:24:51.045793  112272 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0417 19:24:51.045796  112272 command_runner.go:130] > # CNI plugins.
	I0417 19:24:51.045800  112272 command_runner.go:130] > [crio.network]
	I0417 19:24:51.045805  112272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0417 19:24:51.045813  112272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0417 19:24:51.045817  112272 command_runner.go:130] > # cni_default_network = ""
	I0417 19:24:51.045823  112272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0417 19:24:51.045827  112272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0417 19:24:51.045832  112272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0417 19:24:51.045838  112272 command_runner.go:130] > # plugin_dirs = [
	I0417 19:24:51.045842  112272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0417 19:24:51.045845  112272 command_runner.go:130] > # ]
	I0417 19:24:51.045850  112272 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0417 19:24:51.045857  112272 command_runner.go:130] > [crio.metrics]
	I0417 19:24:51.045867  112272 command_runner.go:130] > # Globally enable or disable metrics support.
	I0417 19:24:51.045873  112272 command_runner.go:130] > enable_metrics = true
	I0417 19:24:51.045878  112272 command_runner.go:130] > # Specify enabled metrics collectors.
	I0417 19:24:51.045882  112272 command_runner.go:130] > # By default, all metrics are enabled.
	I0417 19:24:51.045887  112272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0417 19:24:51.045895  112272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0417 19:24:51.045900  112272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0417 19:24:51.045907  112272 command_runner.go:130] > # metrics_collectors = [
	I0417 19:24:51.045910  112272 command_runner.go:130] > # 	"operations",
	I0417 19:24:51.045914  112272 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0417 19:24:51.045920  112272 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0417 19:24:51.045924  112272 command_runner.go:130] > # 	"operations_errors",
	I0417 19:24:51.045928  112272 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0417 19:24:51.045932  112272 command_runner.go:130] > # 	"image_pulls_by_name",
	I0417 19:24:51.045936  112272 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0417 19:24:51.045940  112272 command_runner.go:130] > # 	"image_pulls_failures",
	I0417 19:24:51.045949  112272 command_runner.go:130] > # 	"image_pulls_successes",
	I0417 19:24:51.045956  112272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0417 19:24:51.045965  112272 command_runner.go:130] > # 	"image_layer_reuse",
	I0417 19:24:51.045973  112272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0417 19:24:51.045981  112272 command_runner.go:130] > # 	"containers_oom_total",
	I0417 19:24:51.045987  112272 command_runner.go:130] > # 	"containers_oom",
	I0417 19:24:51.045995  112272 command_runner.go:130] > # 	"processes_defunct",
	I0417 19:24:51.046001  112272 command_runner.go:130] > # 	"operations_total",
	I0417 19:24:51.046009  112272 command_runner.go:130] > # 	"operations_latency_seconds",
	I0417 19:24:51.046019  112272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0417 19:24:51.046028  112272 command_runner.go:130] > # 	"operations_errors_total",
	I0417 19:24:51.046037  112272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0417 19:24:51.046050  112272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0417 19:24:51.046055  112272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0417 19:24:51.046059  112272 command_runner.go:130] > # 	"image_pulls_success_total",
	I0417 19:24:51.046063  112272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0417 19:24:51.046073  112272 command_runner.go:130] > # 	"containers_oom_count_total",
	I0417 19:24:51.046080  112272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0417 19:24:51.046085  112272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0417 19:24:51.046090  112272 command_runner.go:130] > # ]
	I0417 19:24:51.046101  112272 command_runner.go:130] > # The port on which the metrics server will listen.
	I0417 19:24:51.046108  112272 command_runner.go:130] > # metrics_port = 9090
	I0417 19:24:51.046113  112272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0417 19:24:51.046120  112272 command_runner.go:130] > # metrics_socket = ""
	I0417 19:24:51.046124  112272 command_runner.go:130] > # The certificate for the secure metrics server.
	I0417 19:24:51.046132  112272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0417 19:24:51.046137  112272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0417 19:24:51.046142  112272 command_runner.go:130] > # certificate on any modification event.
	I0417 19:24:51.046148  112272 command_runner.go:130] > # metrics_cert = ""
	I0417 19:24:51.046153  112272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0417 19:24:51.046160  112272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0417 19:24:51.046163  112272 command_runner.go:130] > # metrics_key = ""
	I0417 19:24:51.046169  112272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0417 19:24:51.046175  112272 command_runner.go:130] > [crio.tracing]
	I0417 19:24:51.046180  112272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0417 19:24:51.046186  112272 command_runner.go:130] > # enable_tracing = false
	I0417 19:24:51.046191  112272 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0417 19:24:51.046198  112272 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0417 19:24:51.046204  112272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0417 19:24:51.046211  112272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0417 19:24:51.046215  112272 command_runner.go:130] > # CRI-O NRI configuration.
	I0417 19:24:51.046221  112272 command_runner.go:130] > [crio.nri]
	I0417 19:24:51.046225  112272 command_runner.go:130] > # Globally enable or disable NRI.
	I0417 19:24:51.046230  112272 command_runner.go:130] > # enable_nri = false
	I0417 19:24:51.046235  112272 command_runner.go:130] > # NRI socket to listen on.
	I0417 19:24:51.046244  112272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0417 19:24:51.046251  112272 command_runner.go:130] > # NRI plugin directory to use.
	I0417 19:24:51.046255  112272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0417 19:24:51.046262  112272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0417 19:24:51.046267  112272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0417 19:24:51.046274  112272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0417 19:24:51.046278  112272 command_runner.go:130] > # nri_disable_connections = false
	I0417 19:24:51.046285  112272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0417 19:24:51.046289  112272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0417 19:24:51.046296  112272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0417 19:24:51.046300  112272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0417 19:24:51.046311  112272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0417 19:24:51.046320  112272 command_runner.go:130] > [crio.stats]
	I0417 19:24:51.046326  112272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0417 19:24:51.046334  112272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0417 19:24:51.046338  112272 command_runner.go:130] > # stats_collection_period = 0
	I0417 19:24:51.046374  112272 command_runner.go:130] ! time="2024-04-17 19:24:51.012743548Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0417 19:24:51.046391  112272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
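	The crio.conf dump above ends with metrics explicitly enabled ([crio.metrics] with enable_metrics = true) and the commented defaults metrics_port = 9090 and metrics_cert/metrics_key unset. A minimal sketch of how those settings could be exercised from inside the node is shown below; the port and plain-HTTP assumption come from the commented defaults, and the helper is illustrative only, not part of minikube or CRI-O.
// metrics_probe.go: a minimal sketch, assuming the [crio.metrics] settings shown
// above (enable_metrics = true, default metrics_port 9090, no TLS configured)
// and that it runs on the node itself.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// With metrics_cert/metrics_key unset, CRI-O serves Prometheus metrics
	// over plain HTTP on metrics_port.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	// Print only the crio_/container_runtime_ prefixed series named in the
	// metrics_collectors comment, to keep the output short.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}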
	I0417 19:24:51.046528  112272 cni.go:84] Creating CNI manager for ""
	I0417 19:24:51.046540  112272 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0417 19:24:51.046550  112272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:24:51.046578  112272 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-990943 NodeName:multinode-990943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:24:51.046727  112272 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-990943"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
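	The rendered kubeadm config above fixes three networking values: advertiseAddress 192.168.39.106, podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A minimal sketch of a sanity check over those values follows; it is not minikube code, just an illustration of the relationships the config relies on.
// cidr_check.go: a minimal sketch that checks the node address sits outside the
// cluster-internal ranges and that the two ranges do not overlap.
package main

import (
	"fmt"
	"net"
)

func main() {
	advertise := net.ParseIP("192.168.39.106")     // localAPIEndpoint.advertiseAddress
	_, podNet, _ := net.ParseCIDR("10.244.0.0/16") // networking.podSubnet
	_, svcNet, _ := net.ParseCIDR("10.96.0.0/12")  // networking.serviceSubnet

	fmt.Println("advertise in podSubnet:", podNet.Contains(advertise))     // want false
	fmt.Println("advertise in serviceSubnet:", svcNet.Contains(advertise)) // want false
	// A simple overlap test: neither range contains the other's base address.
	fmt.Println("podSubnet overlaps serviceSubnet:",
		podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)) // want false
}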
	I0417 19:24:51.046783  112272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:24:51.057956  112272 command_runner.go:130] > kubeadm
	I0417 19:24:51.057978  112272 command_runner.go:130] > kubectl
	I0417 19:24:51.057982  112272 command_runner.go:130] > kubelet
	I0417 19:24:51.058025  112272 binaries.go:44] Found k8s binaries, skipping transfer
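	The "Found k8s binaries, skipping transfer" decision above follows a single `sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2` that returned kubeadm, kubectl and kubelet. A minimal sketch of that presence check is below; the directory layout is taken from the log, the helper itself is illustrative and not minikube's implementation.
// binaries_check.go: a minimal sketch of the skip-transfer decision.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func haveBinaries(version string) bool {
	dir := filepath.Join("/var/lib/minikube/binaries", version)
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
			return false // anything missing means the binaries must be transferred
		}
	}
	return true
}

func main() {
	fmt.Println("skip transfer:", haveBinaries("v1.30.0-rc.2"))
}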
	I0417 19:24:51.058070  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:24:51.068441  112272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0417 19:24:51.086142  112272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:24:51.103960  112272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0417 19:24:51.121894  112272 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0417 19:24:51.125971  112272 command_runner.go:130] > 192.168.39.106	control-plane.minikube.internal
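	The grep above confirms /etc/hosts already maps 192.168.39.106 to control-plane.minikube.internal, so no hosts-file edit is needed. A minimal sketch of the same lookup is shown here; it is illustrative only, with the IP and hostname taken from this run.
// hosts_check.go: a minimal sketch of the /etc/hosts entry check.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println("cannot read /etc/hosts:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// A matching line maps the node IP to the control-plane alias.
		if len(fields) >= 2 && fields[0] == "192.168.39.106" {
			for _, h := range fields[1:] {
				if h == "control-plane.minikube.internal" {
					fmt.Println("entry present")
					return
				}
			}
		}
	}
	fmt.Println("entry missing")
}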
	I0417 19:24:51.126043  112272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:24:51.268082  112272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:24:51.283297  112272 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943 for IP: 192.168.39.106
	I0417 19:24:51.283321  112272 certs.go:194] generating shared ca certs ...
	I0417 19:24:51.283337  112272 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:24:51.283497  112272 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:24:51.283541  112272 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:24:51.283552  112272 certs.go:256] generating profile certs ...
	I0417 19:24:51.283654  112272 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/client.key
	I0417 19:24:51.283715  112272 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key.edfc69ee
	I0417 19:24:51.283762  112272 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key
	I0417 19:24:51.283773  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 19:24:51.283797  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 19:24:51.283819  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 19:24:51.283832  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 19:24:51.283844  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 19:24:51.283857  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 19:24:51.283873  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 19:24:51.283885  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 19:24:51.283931  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:24:51.283956  112272 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:24:51.283965  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:24:51.283991  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:24:51.284017  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:24:51.284042  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:24:51.284076  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:24:51.284101  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.284115  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.284127  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.284687  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:24:51.310323  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:24:51.335738  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:24:51.359398  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:24:51.383117  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0417 19:24:51.407748  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 19:24:51.431230  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:24:51.456633  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:24:51.480980  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:24:51.506296  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:24:51.530907  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:24:51.561182  112272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:24:51.590071  112272 ssh_runner.go:195] Run: openssl version
	I0417 19:24:51.596840  112272 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0417 19:24:51.596940  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:24:51.633382  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650892  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650939  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650998  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.667892  112272 command_runner.go:130] > b5213941
	I0417 19:24:51.667998  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:24:51.679891  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:24:51.693043  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.697937  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.698267  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.698337  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.706971  112272 command_runner.go:130] > 51391683
	I0417 19:24:51.707162  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:24:51.718585  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:24:51.732855  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.737987  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.738128  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.738208  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.743961  112272 command_runner.go:130] > 3ec20f2e
	I0417 19:24:51.744292  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
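	Each of the three CA installs above runs `openssl x509 -hash -noout -in <pem>` and then links /etc/ssl/certs/<hash>.0 to the certificate (hashes b5213941, 51391683 and 3ec20f2e in this run). A minimal sketch of that hash-and-symlink pair follows; it shells out to the same openssl invocation seen in the log, links directly to the source PEM for brevity, and is illustrative only.
// install_ca.go: a minimal sketch of the hash-and-symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA creates /etc/ssl/certs/<subject-hash>.0 pointing at the given PEM,
// mirroring the "openssl x509 -hash -noout -in ..." plus "ln -fs ..." pair.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -fs: replace an existing link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}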
	I0417 19:24:51.754616  112272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:24:51.764217  112272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:24:51.764241  112272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0417 19:24:51.764254  112272 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0417 19:24:51.764265  112272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0417 19:24:51.764272  112272 command_runner.go:130] > Access: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764277  112272 command_runner.go:130] > Modify: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764293  112272 command_runner.go:130] > Change: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764300  112272 command_runner.go:130] >  Birth: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764669  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:24:51.777834  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.778131  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:24:51.788056  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.788655  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:24:51.794758  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.794831  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:24:51.800388  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.800709  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:24:51.806533  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.806805  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0417 19:24:51.822078  112272 command_runner.go:130] > Certificate will not expire
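	Each `openssl x509 -noout -in <crt> -checkend 86400` above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. A minimal sketch of the same check done with the standard library instead of openssl is shown below; it is illustrative only, with the certificate path taken from this run.
// checkend.go: a minimal sketch of the "-checkend 86400" checks above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d (openssl -checkend 86400 corresponds to d = 24h).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}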
	I0417 19:24:51.822169  112272 kubeadm.go:391] StartCluster: {Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:24:51.822324  112272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:24:51.822383  112272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:24:51.880257  112272 command_runner.go:130] > acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2
	I0417 19:24:51.880309  112272 command_runner.go:130] > e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af
	I0417 19:24:51.880317  112272 command_runner.go:130] > dfd12fc760187cdc26809686c35b3e4460df331d96d22aa3d8093812c833263a
	I0417 19:24:51.880327  112272 command_runner.go:130] > 783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60
	I0417 19:24:51.880335  112272 command_runner.go:130] > bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed
	I0417 19:24:51.880340  112272 command_runner.go:130] > 3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42
	I0417 19:24:51.880349  112272 command_runner.go:130] > 1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674
	I0417 19:24:51.880358  112272 command_runner.go:130] > d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df
	I0417 19:24:51.880381  112272 command_runner.go:130] > e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112
	I0417 19:24:51.881505  112272 cri.go:89] found id: "acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2"
	I0417 19:24:51.881526  112272 cri.go:89] found id: "e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af"
	I0417 19:24:51.881551  112272 cri.go:89] found id: "dfd12fc760187cdc26809686c35b3e4460df331d96d22aa3d8093812c833263a"
	I0417 19:24:51.881560  112272 cri.go:89] found id: "783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60"
	I0417 19:24:51.881562  112272 cri.go:89] found id: "bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed"
	I0417 19:24:51.881570  112272 cri.go:89] found id: "3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42"
	I0417 19:24:51.881573  112272 cri.go:89] found id: "1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674"
	I0417 19:24:51.881576  112272 cri.go:89] found id: "d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df"
	I0417 19:24:51.881578  112272 cri.go:89] found id: "e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112"
	I0417 19:24:51.881584  112272 cri.go:89] found id: ""
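	The StartCluster step above enumerates kube-system container IDs by running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and recording each returned ID. A minimal sketch of that listing follows; it runs the same crictl command seen in the log (root privileges and a running CRI-O are assumed) and is illustrative only.
// list_ids.go: a minimal sketch of the container-ID listing above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// One container ID per line, as in the "found id:" entries in the log.
	ids := strings.Fields(strings.TrimSpace(string(out)))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Println("total:", len(ids))
}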
	I0417 19:24:51.881631  112272 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.521685421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713381978521659250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8171f5e1-ea98-444a-ba15-55db782a7da4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.522321557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=344e0474-a446-4f6d-8681-9e78b607ab9a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.522373600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=344e0474-a446-4f6d-8681-9e78b607ab9a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.522800545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=344e0474-a446-4f6d-8681-9e78b607ab9a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.571364113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1af7b71d-a094-4d54-8d89-f6aa57483967 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.571532923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1af7b71d-a094-4d54-8d89-f6aa57483967 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.573354007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3454afed-3467-4fb2-b18e-1f3ad84faec3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.574133882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713381978574107574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3454afed-3467-4fb2-b18e-1f3ad84faec3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.574951099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07ef303d-cd12-4cdc-a4b6-07a6d5226070 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.575003237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07ef303d-cd12-4cdc-a4b6-07a6d5226070 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.575345667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07ef303d-cd12-4cdc-a4b6-07a6d5226070 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.622857832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fefa562b-300c-4cb8-bc9f-4507297444ba name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.622926500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fefa562b-300c-4cb8-bc9f-4507297444ba name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.624306595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d77e38e5-64aa-4993-801e-0c32683364a2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.625249021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713381978625184565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d77e38e5-64aa-4993-801e-0c32683364a2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.625978578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79d9c3d5-1cbd-440e-abbb-9f7085b7d13b name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.626029473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79d9c3d5-1cbd-440e-abbb-9f7085b7d13b name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.626626899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79d9c3d5-1cbd-440e-abbb-9f7085b7d13b name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.670767108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba186fb3-595d-42fc-9a97-ae303278ea06 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.671004337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba186fb3-595d-42fc-9a97-ae303278ea06 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.672017760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad85c5d0-eb6c-4f29-9be5-7f888244c735 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.672400325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713381978672377229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad85c5d0-eb6c-4f29-9be5-7f888244c735 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.673155383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f59166c2-89c2-4c47-938b-abba92a1201a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.673233786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f59166c2-89c2-4c47-938b-abba92a1201a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:26:18 multinode-990943 crio[2847]: time="2024-04-17 19:26:18.673701258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f59166c2-89c2-4c47-938b-abba92a1201a name=/runtime.v1.RuntimeService/ListContainers
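	
	The repeated ListContainers request/response pairs above are the kubelet polling CRI-O over its CRI gRPC endpoint. As a point of reference only (this is not part of the captured test output), the following is a minimal sketch of issuing the same call directly with the Kubernetes CRI API; the socket path /var/run/crio/crio.sock and the 10-second timeout are assumptions, not values taken from this run.
	
	// Minimal sketch: list all containers via CRI-O's CRI endpoint,
	// mirroring the "No filters were applied" ListContainers calls logged above.
	// Assumed socket path and timeout; adjust for the actual runtime configuration.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Connect to the (assumed) default CRI-O socket over a local unix connection.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// An empty filter returns the full container list, as in the debug log entries above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}
	
	The printed id/state/name triples correspond to the rows of the container status table that follows.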
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1511f418ee73e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   6cf240b67b42b       busybox-fc5497c4f-th5ps
	d3a9ac00162da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   0849988a36abb       coredns-7db6d8ff4d-dt7cs
	1ece9e23e1f0f       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      About a minute ago   Running             kube-proxy                1                   51acde14e0835       kube-proxy-ppn8d
	9955b0de2d709       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   d455bc6c91767       storage-provisioner
	15ddd574dd705       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   d12a0d734c890       kindnet-qk7wm
	5bfe669ed90c6       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      About a minute ago   Running             kube-controller-manager   1                   747f3cfb1c4fe       kube-controller-manager-multinode-990943
	f3af9849b5470       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      About a minute ago   Running             kube-scheduler            1                   5b45a72d7f3f4       kube-scheduler-multinode-990943
	1c03974da7166       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      About a minute ago   Running             kube-apiserver            1                   7ee0d50e2e7f5       kube-apiserver-multinode-990943
	08ce914dce8cc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ed2174ba39d7d       etcd-multinode-990943
	acf951a1e9803       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   0849988a36abb       coredns-7db6d8ff4d-dt7cs
	9e035a8242698       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   c845a7834fd66       busybox-fc5497c4f-th5ps
	e9e6af8d02377       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   ac6b510e10433       storage-provisioner
	783022d51342e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   3af55163936ed       kindnet-qk7wm
	bc33551c73203       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      7 minutes ago        Exited              kube-proxy                0                   0374c589fb2a9       kube-proxy-ppn8d
	3a4c87e81ce09       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      7 minutes ago        Exited              kube-apiserver            0                   36e63773a52fe       kube-apiserver-multinode-990943
	1b728d1ed6b5f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   12a91945c5f1d       etcd-multinode-990943
	d48a0f8541a47       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      7 minutes ago        Exited              kube-scheduler            0                   b902f27763cb1       kube-scheduler-multinode-990943
	e9c1a47ae3971       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      7 minutes ago        Exited              kube-controller-manager   0                   e3141b7fabb28       kube-controller-manager-multinode-990943
	
	
	==> coredns [acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39958 - 49103 "HINFO IN 1039202964132204482.5829226989441497131. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021711051s
	
	
	==> coredns [d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33875 - 22971 "HINFO IN 6065393507524619077.3259373049489553806. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020849821s
	
	
	==> describe nodes <==
	Name:               multinode-990943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-990943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=multinode-990943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_18_51_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:18:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-990943
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:26:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    multinode-990943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e71579ac79364f6aa68f763fac6105cf
	  System UUID:                e71579ac-7936-4f6a-a68f-763fac6105cf
	  Boot ID:                    872cfc14-74a2-4216-ab40-42125acfa7ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-th5ps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 coredns-7db6d8ff4d-dt7cs                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-multinode-990943                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m29s
	  kube-system                 kindnet-qk7wm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-multinode-990943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-multinode-990943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-ppn8d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-multinode-990943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  Starting                 77s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m35s (x8 over 7m35s)  kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x8 over 7m35s)  kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x7 over 7m35s)  kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m29s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s                  kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m16s                  node-controller  Node multinode-990943 event: Registered Node multinode-990943 in Controller
	  Normal  NodeReady                7m13s                  kubelet          Node multinode-990943 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    76s                    kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 76s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                    kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     76s                    kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             76s                    kubelet          Node multinode-990943 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node multinode-990943 event: Registered Node multinode-990943 in Controller
	  Normal  NodeReady                66s                    kubelet          Node multinode-990943 status is now: NodeReady
	
	
	Name:               multinode-990943-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-990943-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=multinode-990943
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T19_25_40_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:25:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-990943-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:26:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:25:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:25:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:25:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:25:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-990943-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81f580ad8d944061925e7b538916ee5d
	  System UUID:                81f580ad-8d94-4061-925e-7b538916ee5d
	  Boot ID:                    cb8ed231-3459-483f-bf78-f2d12e140631
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qnckz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-7c6bt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m43s
	  kube-system                 kube-proxy-5v4n8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 35s                    kube-proxy       
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m43s (x2 over 6m43s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x2 over 6m43s)  kubelet          Node multinode-990943-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x2 over 6m43s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m34s                  kubelet          Node multinode-990943-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet          Node multinode-990943-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           35s                    node-controller  Node multinode-990943-m02 event: Registered Node multinode-990943-m02 in Controller
	  Normal  NodeReady                31s                    kubelet          Node multinode-990943-m02 status is now: NodeReady
	
	
	Name:               multinode-990943-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-990943-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=multinode-990943
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T19_26_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:26:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-990943-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:26:15 +0000   Wed, 17 Apr 2024 19:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:26:15 +0000   Wed, 17 Apr 2024 19:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:26:15 +0000   Wed, 17 Apr 2024 19:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:26:15 +0000   Wed, 17 Apr 2024 19:26:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-990943-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6029b679eec841efa73f3b1117a5216d
	  System UUID:                6029b679-eec8-41ef-a73f-3b1117a5216d
	  Boot ID:                    b9174185-d1a7-4672-875e-f48d8d7e8a7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-q8gbt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-58bgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m52s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m12s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet     Node multinode-990943-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet     Node multinode-990943-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m18s (x2 over 5m18s)  kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m18s (x2 over 5m18s)  kubelet     Node multinode-990943-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m18s (x2 over 5m18s)  kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m9s                   kubelet     Node multinode-990943-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-990943-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-990943-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-990943-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.068042] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061417] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198536] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.123428] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.285686] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.332376] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.056575] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.023774] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.993595] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.061005] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.075308] kauditd_printk_skb: 15 callbacks suppressed
	[Apr17 19:19] systemd-fstab-generator[1465]: Ignoring "noauto" option for root device
	[  +0.116523] kauditd_printk_skb: 21 callbacks suppressed
	[ +44.348855] kauditd_printk_skb: 84 callbacks suppressed
	[Apr17 19:24] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.149830] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.167785] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.150593] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.318781] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +1.974855] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[  +6.687137] kauditd_printk_skb: 132 callbacks suppressed
	[Apr17 19:25] systemd-fstab-generator[3816]: Ignoring "noauto" option for root device
	[  +0.091770] kauditd_printk_skb: 62 callbacks suppressed
	[  +2.947551] systemd-fstab-generator[3937]: Ignoring "noauto" option for root device
	[  +7.688524] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed] <==
	{"level":"info","ts":"2024-04-17T19:24:58.341514Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:24:58.341597Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:24:58.341924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc switched to configuration voters=(1386996336873150412)"}
	{"level":"info","ts":"2024-04-17T19:24:58.341978Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","added-peer-id":"133f99d1dc1797cc","added-peer-peer-urls":["https://192.168.39.106:2380"]}
	{"level":"info","ts":"2024-04-17T19:24:58.342105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:24:58.342129Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:24:58.349921Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:24:58.350365Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-17T19:24:58.350387Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-17T19:24:58.350982Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:24:58.351026Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:24:59.620539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.62485Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:multinode-990943 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:24:59.625019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:24:59.628536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:24:59.629434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-04-17T19:24:59.631893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:24:59.631944Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:24:59.637817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674] <==
	{"level":"info","ts":"2024-04-17T19:20:21.700388Z","caller":"traceutil/trace.go:171","msg":"trace[1059734923] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:634; }","duration":"132.414216ms","start":"2024-04-17T19:20:21.567949Z","end":"2024-04-17T19:20:21.700363Z","steps":["trace[1059734923] 'read index received'  (duration: 90.796845ms)","trace[1059734923] 'applied index is now lower than readState.Index'  (duration: 41.614603ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:20:21.700564Z","caller":"traceutil/trace.go:171","msg":"trace[1274647754] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"163.720009ms","start":"2024-04-17T19:20:21.536834Z","end":"2024-04-17T19:20:21.700554Z","steps":["trace[1274647754] 'process raft request'  (duration: 163.468653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:21.700966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.937421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:1935"}
	{"level":"info","ts":"2024-04-17T19:20:21.701067Z","caller":"traceutil/trace.go:171","msg":"trace[640670488] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:604; }","duration":"133.141383ms","start":"2024-04-17T19:20:21.567904Z","end":"2024-04-17T19:20:21.701045Z","steps":["trace[640670488] 'agreement among raft nodes before linearized reading'  (duration: 132.876675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.679691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.516444ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938274745669941087 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" mod_revision:591 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" value_size:507 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:20:27.679796Z","caller":"traceutil/trace.go:171","msg":"trace[1687982351] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"381.735594ms","start":"2024-04-17T19:20:27.298049Z","end":"2024-04-17T19:20:27.679785Z","steps":["trace[1687982351] 'read index received'  (duration: 156.957766ms)","trace[1687982351] 'applied index is now lower than readState.Index'  (duration: 224.7769ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:20:27.679868Z","caller":"traceutil/trace.go:171","msg":"trace[988375117] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"436.74226ms","start":"2024-04-17T19:20:27.243118Z","end":"2024-04-17T19:20:27.67986Z","steps":["trace[988375117] 'process raft request'  (duration: 212.150393ms)","trace[988375117] 'compare'  (duration: 223.09051ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:20:27.679957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:20:27.243101Z","time spent":"436.813845ms","remote":"127.0.0.1:50356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":568,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" mod_revision:591 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" value_size:507 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" > >"}
	{"level":"warn","ts":"2024-04-17T19:20:27.680189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.133137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:2969"}
	{"level":"info","ts":"2024-04-17T19:20:27.680233Z","caller":"traceutil/trace.go:171","msg":"trace[92731692] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:643; }","duration":"382.200643ms","start":"2024-04-17T19:20:27.298025Z","end":"2024-04-17T19:20:27.680226Z","steps":["trace[92731692] 'agreement among raft nodes before linearized reading'  (duration: 382.096064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.680262Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:20:27.298013Z","time spent":"382.244262ms","remote":"127.0.0.1:50268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2991,"request content":"key:\"/registry/minions/multinode-990943-m03\" "}
	{"level":"warn","ts":"2024-04-17T19:20:27.680389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.946961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-17T19:20:27.680722Z","caller":"traceutil/trace.go:171","msg":"trace[1409258157] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:643; }","duration":"228.028022ms","start":"2024-04-17T19:20:27.452416Z","end":"2024-04-17T19:20:27.680444Z","steps":["trace[1409258157] 'agreement among raft nodes before linearized reading'  (duration: 227.92815ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.980196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.271971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:2969"}
	{"level":"info","ts":"2024-04-17T19:20:27.980315Z","caller":"traceutil/trace.go:171","msg":"trace[1093728173] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:643; }","duration":"181.420691ms","start":"2024-04-17T19:20:27.798875Z","end":"2024-04-17T19:20:27.980296Z","steps":["trace[1093728173] 'range keys from in-memory index tree'  (duration: 181.098235ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:23:16.982885Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-17T19:23:16.983018Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-990943","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	{"level":"warn","ts":"2024-04-17T19:23:16.983219Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:16.983355Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:17.053093Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:17.053181Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-17T19:23:17.053275Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"133f99d1dc1797cc","current-leader-member-id":"133f99d1dc1797cc"}
	{"level":"info","ts":"2024-04-17T19:23:17.05572Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:23:17.055953Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:23:17.055998Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-990943","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	
	
	==> kernel <==
	 19:26:19 up 8 min,  0 users,  load average: 0.12, 0.14, 0.09
	Linux multinode-990943 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b] <==
	I0417 19:25:31.492026       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:25:41.507384       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:25:41.507431       1 main.go:227] handling current node
	I0417 19:25:41.507499       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:25:41.507508       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:25:41.507700       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:25:41.507732       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:25:51.514551       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:25:51.514571       1 main.go:227] handling current node
	I0417 19:25:51.514580       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:25:51.514584       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:25:51.514712       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:25:51.514748       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:26:01.519885       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:26:01.519929       1 main.go:227] handling current node
	I0417 19:26:01.519944       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:26:01.519952       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:26:01.520169       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:26:01.520207       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:26:11.533409       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:26:11.533514       1 main.go:227] handling current node
	I0417 19:26:11.533526       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:26:11.533532       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:26:11.533739       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:26:11.533772       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60] <==
	I0417 19:22:36.091358       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:22:46.102555       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:22:46.102599       1 main.go:227] handling current node
	I0417 19:22:46.102611       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:22:46.102618       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:22:46.102721       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:22:46.102749       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:22:56.114973       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:22:56.115052       1 main.go:227] handling current node
	I0417 19:22:56.115064       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:22:56.115070       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:22:56.115179       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:22:56.115206       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:23:06.136205       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:23:06.136286       1 main.go:227] handling current node
	I0417 19:23:06.136301       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:23:06.136309       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:23:06.136678       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:23:06.136719       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:23:16.155340       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:23:16.155395       1 main.go:227] handling current node
	I0417 19:23:16.155411       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:23:16.155419       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:23:16.156646       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:23:16.156690       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da] <==
	I0417 19:25:01.389606       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:25:01.465185       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:25:01.465287       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0417 19:25:01.465384       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:25:01.465771       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:25:01.466234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:25:01.476821       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:25:01.476970       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:25:01.477000       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:25:01.477007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:25:01.477011       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:25:01.467542       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 19:25:01.478866       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:25:01.479633       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:25:01.479675       1 policy_source.go:224] refreshing policies
	I0417 19:25:01.493554       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:25:01.515346       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 19:25:02.362930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0417 19:25:04.182361       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 19:25:04.303223       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 19:25:04.316821       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 19:25:04.395018       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:25:04.407222       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 19:25:13.917585       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:25:13.935778       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42] <==
	E0417 19:23:17.002064       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.003310       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005375       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005538       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005611       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005666       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.006351       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.000191       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009168       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009668       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009747       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009802       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009921       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.010128       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.010219       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.011654       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012158       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012568       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012748       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc0043b9f60)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012990       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013105       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013142       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013903       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0417 19:23:17.014931       1 controller.go:157] Shutting down quota evaluator
	I0417 19:23:17.015045       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121] <==
	I0417 19:25:14.557934       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.303795ms"
	I0417 19:25:14.559932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="497.2µs"
	I0417 19:25:35.204335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.219255ms"
	I0417 19:25:35.204640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.974µs"
	I0417 19:25:35.214011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.067301ms"
	I0417 19:25:35.214149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.612µs"
	I0417 19:25:35.681730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.468µs"
	I0417 19:25:39.464677       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m02\" does not exist"
	I0417 19:25:39.482048       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m02" podCIDRs=["10.244.1.0/24"]
	I0417 19:25:41.355978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.668µs"
	I0417 19:25:41.408653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.34µs"
	I0417 19:25:41.424735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.522µs"
	I0417 19:25:41.440936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.082µs"
	I0417 19:25:41.449807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.733µs"
	I0417 19:25:41.451627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.139µs"
	I0417 19:25:48.167412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:25:48.187670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.556µs"
	I0417 19:25:48.209114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.584µs"
	I0417 19:25:50.553262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.555615ms"
	I0417 19:25:50.553523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.615µs"
	I0417 19:26:06.382889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:26:07.377843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:26:07.378950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:26:07.409819       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.2.0/24"]
	I0417 19:26:15.664997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	
	
	==> kube-controller-manager [e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112] <==
	I0417 19:19:36.715915       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m02\" does not exist"
	I0417 19:19:36.729103       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m02" podCIDRs=["10.244.1.0/24"]
	I0417 19:19:38.198075       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-990943-m02"
	I0417 19:19:45.835979       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:19:48.030182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.943376ms"
	I0417 19:19:48.050904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.585021ms"
	I0417 19:19:48.051257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.484µs"
	I0417 19:19:48.056610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.168µs"
	I0417 19:19:50.587139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.534434ms"
	I0417 19:19:50.587611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.241µs"
	I0417 19:19:50.848210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.315899ms"
	I0417 19:19:50.848320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.585µs"
	I0417 19:20:21.703827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:20:21.704209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:20:21.715998       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.2.0/24"]
	I0417 19:20:23.219694       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-990943-m03"
	I0417 19:20:30.446215       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:00.640619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:01.783651       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:21:01.783846       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:01.794714       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.3.0/24"]
	I0417 19:21:10.402512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:48.274066       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m03"
	I0417 19:21:48.341627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.956152ms"
	I0417 19:21:48.341837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.139µs"
	
	
	==> kube-proxy [1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659] <==
	I0417 19:25:00.156073       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:25:01.481345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I0417 19:25:01.539865       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:25:01.539989       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:25:01.540008       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:25:01.542804       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:25:01.543105       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:25:01.543320       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:25:01.544834       1 config.go:192] "Starting service config controller"
	I0417 19:25:01.544884       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:25:01.544921       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:25:01.544945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:25:01.545566       1 config.go:319] "Starting node config controller"
	I0417 19:25:01.547592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:25:01.645060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:25:01.645132       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:25:01.647968       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed] <==
	I0417 19:19:05.093788       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:19:05.102114       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I0417 19:19:05.146066       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:19:05.146093       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:19:05.146107       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:19:05.148766       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:19:05.149025       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:19:05.149240       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:19:05.150298       1 config.go:192] "Starting service config controller"
	I0417 19:19:05.150342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:19:05.150382       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:19:05.150398       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:19:05.150991       1 config.go:319] "Starting node config controller"
	I0417 19:19:05.151029       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:19:05.251235       1 shared_informer.go:320] Caches are synced for node config
	I0417 19:19:05.251326       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:19:05.251336       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df] <==
	E0417 19:18:47.994937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0417 19:18:47.995016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:47.995104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:18:48.805428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 19:18:48.805505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 19:18:49.028080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:49.028234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:18:49.049163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:18:49.049596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:18:49.055541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:18:49.055642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:18:49.058546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 19:18:49.059111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 19:18:49.061355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:18:49.061426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 19:18:49.104663       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:18:49.104771       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:18:49.139342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0417 19:18:49.139443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0417 19:18:49.197715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:18:49.197800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:18:49.286419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:49.286523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0417 19:18:51.684039       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0417 19:23:16.992110       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815] <==
	I0417 19:24:59.408352       1 serving.go:380] Generated self-signed cert in-memory
	W0417 19:25:01.384927       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0417 19:25:01.384970       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:25:01.384982       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0417 19:25:01.384987       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0417 19:25:01.425007       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0417 19:25:01.425048       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:25:01.428946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0417 19:25:01.429078       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0417 19:25:01.429114       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:25:01.429130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:25:01.529743       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:25:03 multinode-990943 kubelet[3823]: I0417 19:25:03.966836    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2d754d00c72a58fa0595b1a7dca2e8d-k8s-certs\") pod \"kube-controller-manager-multinode-990943\" (UID: \"b2d754d00c72a58fa0595b1a7dca2e8d\") " pod="kube-system/kube-controller-manager-multinode-990943"
	Apr 17 19:25:03 multinode-990943 kubelet[3823]: I0417 19:25:03.966852    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2d754d00c72a58fa0595b1a7dca2e8d-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-990943\" (UID: \"b2d754d00c72a58fa0595b1a7dca2e8d\") " pod="kube-system/kube-controller-manager-multinode-990943"
	Apr 17 19:25:03 multinode-990943 kubelet[3823]: I0417 19:25:03.966867    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29306408692bf92f3a30b7b7a05afb2c-kubeconfig\") pod \"kube-scheduler-multinode-990943\" (UID: \"29306408692bf92f3a30b7b7a05afb2c\") " pod="kube-system/kube-scheduler-multinode-990943"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.491542    3823 apiserver.go:52] "Watching apiserver"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.494345    3823 topology_manager.go:215] "Topology Admit Handler" podUID="b8349fcd-3024-4211-a9c3-4547c8f67778" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dt7cs"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.495626    3823 topology_manager.go:215] "Topology Admit Handler" podUID="431b5f9e-8334-49b7-a686-4883f93e09cd" podNamespace="kube-system" podName="kube-proxy-ppn8d"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.495788    3823 topology_manager.go:215] "Topology Admit Handler" podUID="ebc8028e-ef63-42b6-aeaf-fa45a37945a4" podNamespace="kube-system" podName="kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.496105    3823 topology_manager.go:215] "Topology Admit Handler" podUID="0788ef97-1d5f-4ec6-9194-2fc80bba71a0" podNamespace="kube-system" podName="storage-provisioner"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.496424    3823 topology_manager.go:215] "Topology Admit Handler" podUID="f81066d9-6c6e-44ca-9c5c-3acfaa971eca" podNamespace="default" podName="busybox-fc5497c4f-th5ps"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.537342    3823 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572055    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/431b5f9e-8334-49b7-a686-4883f93e09cd-lib-modules\") pod \"kube-proxy-ppn8d\" (UID: \"431b5f9e-8334-49b7-a686-4883f93e09cd\") " pod="kube-system/kube-proxy-ppn8d"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572146    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/431b5f9e-8334-49b7-a686-4883f93e09cd-xtables-lock\") pod \"kube-proxy-ppn8d\" (UID: \"431b5f9e-8334-49b7-a686-4883f93e09cd\") " pod="kube-system/kube-proxy-ppn8d"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572167    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-xtables-lock\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572181    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-lib-modules\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572206    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0788ef97-1d5f-4ec6-9194-2fc80bba71a0-tmp\") pod \"storage-provisioner\" (UID: \"0788ef97-1d5f-4ec6-9194-2fc80bba71a0\") " pod="kube-system/storage-provisioner"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572241    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-cni-cfg\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: E0417 19:25:04.785706    3823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-990943\" already exists" pod="kube-system/kube-apiserver-multinode-990943"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: E0417 19:25:04.793218    3823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-990943\" already exists" pod="kube-system/kube-controller-manager-multinode-990943"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.796067    3823 scope.go:117] "RemoveContainer" containerID="acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2"
	Apr 17 19:25:14 multinode-990943 kubelet[3823]: I0417 19:25:14.516183    3823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 17 19:26:03 multinode-990943 kubelet[3823]: E0417 19:26:03.614855    3823 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0417 19:26:18.217914  113132 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18665-75973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-990943 -n multinode-990943
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-990943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (306.99s)
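
Note on the "bufio.Scanner: token too long" error in the stderr above: Go's bufio.Scanner rejects any line longer than its default 64 KiB token limit, and lastStart.txt contains single log lines far larger than that (see the multi-kilobyte cluster-config entries in the "Last Start" log below). The following is a minimal, self-contained sketch, not minikube's actual logs.go code, that reproduces the error and shows the usual remedy of enlarging the scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.MaxScanTokenSize (64 KiB), similar in
		// size to the cluster-config lines minikube writes into lastStart.txt.
		longLine := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

		// Default scanner: Scan() stops and Err() reports bufio.ErrTooLong,
		// which prints as "bufio.Scanner: token too long".
		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err())

		// Enlarged buffer: the same line scans cleanly.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow tokens up to 1 MiB
		for s.Scan() {
		}
		fmt.Println("1 MiB buffer:", s.Err())
	}
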

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 stop
E0417 19:28:19.320646   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990943 stop: exit status 82 (2m0.486853699s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-990943-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-990943 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990943 status: exit status 3 (18.853918994s)

                                                
                                                
-- stdout --
	multinode-990943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-990943-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0417 19:28:41.805154  113703 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0417 19:28:41.805195  113703 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-990943 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-990943 -n multinode-990943
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-990943 logs -n 25: (1.552475184s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943:/home/docker/cp-test_multinode-990943-m02_multinode-990943.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943 sudo cat                                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m02_multinode-990943.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03:/home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943-m03 sudo cat                                   | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp testdata/cp-test.txt                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943:/home/docker/cp-test_multinode-990943-m03_multinode-990943.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943 sudo cat                                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02:/home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943-m02 sudo cat                                   | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-990943 node stop m03                                                          | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	| node    | multinode-990943 node start                                                             | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:21 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| stop    | -p multinode-990943                                                                     | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| start   | -p multinode-990943                                                                     | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:23 UTC | 17 Apr 24 19:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC |                     |
	| node    | multinode-990943 node delete                                                            | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC | 17 Apr 24 19:26 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-990943 stop                                                                   | multinode-990943 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:23:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:23:16.013049  112272 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:23:16.013309  112272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:23:16.013319  112272 out.go:304] Setting ErrFile to fd 2...
	I0417 19:23:16.013323  112272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:23:16.013486  112272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:23:16.014005  112272 out.go:298] Setting JSON to false
	I0417 19:23:16.014928  112272 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11144,"bootTime":1713370652,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:23:16.014988  112272 start.go:139] virtualization: kvm guest
	I0417 19:23:16.017329  112272 out.go:177] * [multinode-990943] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:23:16.018808  112272 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:23:16.020121  112272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:23:16.018864  112272 notify.go:220] Checking for updates...
	I0417 19:23:16.022728  112272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:23:16.024401  112272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:23:16.025936  112272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:23:16.027308  112272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:23:16.029097  112272 config.go:182] Loaded profile config "multinode-990943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:23:16.029290  112272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:23:16.029701  112272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:23:16.029785  112272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:23:16.044805  112272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I0417 19:23:16.045172  112272 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:23:16.045726  112272 main.go:141] libmachine: Using API Version  1
	I0417 19:23:16.045748  112272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:23:16.046102  112272 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:23:16.046301  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.082402  112272 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 19:23:16.083792  112272 start.go:297] selected driver: kvm2
	I0417 19:23:16.083808  112272 start.go:901] validating driver "kvm2" against &{Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:23:16.084097  112272 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:23:16.084549  112272 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:23:16.084632  112272 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:23:16.099289  112272 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:23:16.099909  112272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:23:16.100000  112272 cni.go:84] Creating CNI manager for ""
	I0417 19:23:16.100013  112272 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0417 19:23:16.100060  112272 start.go:340] cluster config:
	{Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:23:16.100188  112272 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:23:16.102002  112272 out.go:177] * Starting "multinode-990943" primary control-plane node in "multinode-990943" cluster
	I0417 19:23:16.103453  112272 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:23:16.103491  112272 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:23:16.103502  112272 cache.go:56] Caching tarball of preloaded images
	I0417 19:23:16.103568  112272 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:23:16.103578  112272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:23:16.103701  112272 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/config.json ...
	I0417 19:23:16.103898  112272 start.go:360] acquireMachinesLock for multinode-990943: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:23:16.103943  112272 start.go:364] duration metric: took 24.946µs to acquireMachinesLock for "multinode-990943"
	I0417 19:23:16.103962  112272 start.go:96] Skipping create...Using existing machine configuration
	I0417 19:23:16.103977  112272 fix.go:54] fixHost starting: 
	I0417 19:23:16.104231  112272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:23:16.104268  112272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:23:16.118542  112272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0417 19:23:16.118955  112272 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:23:16.119492  112272 main.go:141] libmachine: Using API Version  1
	I0417 19:23:16.119518  112272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:23:16.119857  112272 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:23:16.120038  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.120215  112272 main.go:141] libmachine: (multinode-990943) Calling .GetState
	I0417 19:23:16.121901  112272 fix.go:112] recreateIfNeeded on multinode-990943: state=Running err=<nil>
	W0417 19:23:16.121920  112272 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 19:23:16.123885  112272 out.go:177] * Updating the running kvm2 "multinode-990943" VM ...
	I0417 19:23:16.125142  112272 machine.go:94] provisionDockerMachine start ...
	I0417 19:23:16.125162  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:23:16.125364  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.127859  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.128287  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.128316  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.128418  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.128591  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.128747  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.128960  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.129142  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.129325  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.129337  112272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:23:16.246359  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-990943
	
	I0417 19:23:16.246396  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.246678  112272 buildroot.go:166] provisioning hostname "multinode-990943"
	I0417 19:23:16.246712  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.246960  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.249583  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.249915  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.249939  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.250136  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.250353  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.250513  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.250667  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.250846  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.251034  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.251049  112272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-990943 && echo "multinode-990943" | sudo tee /etc/hostname
	I0417 19:23:16.373825  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-990943
	
	I0417 19:23:16.373858  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.376629  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.376937  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.376968  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.377175  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.377406  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.377572  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.377720  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.377892  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.378071  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.378087  112272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-990943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-990943/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-990943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:23:16.486218  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
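
For reference, the grep/sed block the provisioner just ran over SSH is the whole of the /etc/hosts update. As a rough, illustrative sketch only (not minikube's actual code), the same edit can be expressed as a small Go helper operating on the file contents:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: if no line in the hosts
    // file ends with the machine name, rewrite an existing 127.0.1.1 entry or
    // append a new one.
    func ensureHostsEntry(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
            return hosts // entry already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        if !strings.HasSuffix(hosts, "\n") {
            hosts += "\n"
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "multinode-990943"))
    }

Run against a hosts file with no entry for multinode-990943, this appends "127.0.1.1 multinode-990943", which is what the tee branch of the shell script does.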
	I0417 19:23:16.486247  112272 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 19:23:16.486292  112272 buildroot.go:174] setting up certificates
	I0417 19:23:16.486305  112272 provision.go:84] configureAuth start
	I0417 19:23:16.486320  112272 main.go:141] libmachine: (multinode-990943) Calling .GetMachineName
	I0417 19:23:16.486606  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:23:16.489094  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.489471  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.489493  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.489671  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.492219  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.492571  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.492599  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.492721  112272 provision.go:143] copyHostCerts
	I0417 19:23:16.492757  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:23:16.492814  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 19:23:16.492839  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:23:16.492905  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 19:23:16.492985  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:23:16.493007  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 19:23:16.493016  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:23:16.493054  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 19:23:16.493112  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:23:16.493137  112272 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 19:23:16.493146  112272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:23:16.493180  112272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 19:23:16.493249  112272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.multinode-990943 san=[127.0.0.1 192.168.39.106 localhost minikube multinode-990943]
	I0417 19:23:16.677122  112272 provision.go:177] copyRemoteCerts
	I0417 19:23:16.677181  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:23:16.677206  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.679696  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.680073  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.680101  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.680304  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.680514  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.680706  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.680878  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:23:16.763221  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0417 19:23:16.763304  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:23:16.790257  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0417 19:23:16.790332  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0417 19:23:16.818776  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0417 19:23:16.818848  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 19:23:16.844481  112272 provision.go:87] duration metric: took 358.157175ms to configureAuth
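
configureAuth above regenerates the machine's server certificate with the SAN list shown in the provision.go line (127.0.0.1, 192.168.39.106, localhost, minikube, multinode-990943) and copies ca.pem/server.pem/server-key.pem to /etc/docker. Purely as a hedged sketch of what such a CA-signed server certificate looks like (this is not minikube's implementation; key size and validity period are assumptions, the 26280h figure is borrowed from the CertExpiration setting later in this log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Self-signed CA, then a server cert carrying the SANs from the log line above.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-990943"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-990943"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.106")},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }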
	I0417 19:23:16.844512  112272 buildroot.go:189] setting minikube options for container-runtime
	I0417 19:23:16.844723  112272 config.go:182] Loaded profile config "multinode-990943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:23:16.844834  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:23:16.847570  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.847940  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:23:16.847971  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:23:16.848216  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:23:16.848402  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.848577  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:23:16.848753  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:23:16.848947  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:23:16.849115  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:23:16.849130  112272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:24:47.770117  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:24:47.770159  112272 machine.go:97] duration metric: took 1m31.64500083s to provisionDockerMachine
	I0417 19:24:47.770172  112272 start.go:293] postStartSetup for "multinode-990943" (driver="kvm2")
	I0417 19:24:47.770188  112272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:24:47.770212  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:47.770565  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:24:47.770623  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:47.774060  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.774556  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:47.774585  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.774713  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:47.774952  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.775092  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:47.775260  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:47.860865  112272 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:24:47.865730  112272 command_runner.go:130] > NAME=Buildroot
	I0417 19:24:47.865749  112272 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0417 19:24:47.865753  112272 command_runner.go:130] > ID=buildroot
	I0417 19:24:47.865758  112272 command_runner.go:130] > VERSION_ID=2023.02.9
	I0417 19:24:47.865763  112272 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0417 19:24:47.865823  112272 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:24:47.865842  112272 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:24:47.865912  112272 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:24:47.866001  112272 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:24:47.866012  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /etc/ssl/certs/832072.pem
	I0417 19:24:47.866109  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:24:47.875968  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:24:47.901610  112272 start.go:296] duration metric: took 131.424047ms for postStartSetup
	I0417 19:24:47.901675  112272 fix.go:56] duration metric: took 1m31.797703582s for fixHost
	I0417 19:24:47.901699  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:47.904261  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.904625  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:47.904649  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:47.904851  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:47.905072  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.905259  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:47.905446  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:47.905605  112272 main.go:141] libmachine: Using SSH client type: native
	I0417 19:24:47.905812  112272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0417 19:24:47.905824  112272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 19:24:48.005916  112272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713381887.986837354
	
	I0417 19:24:48.005942  112272 fix.go:216] guest clock: 1713381887.986837354
	I0417 19:24:48.005971  112272 fix.go:229] Guest: 2024-04-17 19:24:47.986837354 +0000 UTC Remote: 2024-04-17 19:24:47.901680293 +0000 UTC m=+91.942202216 (delta=85.157061ms)
	I0417 19:24:48.006002  112272 fix.go:200] guest clock delta is within tolerance: 85.157061ms
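
fix.go reads the guest clock over SSH (the date command above), compares it with the host clock, and only resyncs when the difference exceeds a tolerance; the 85ms delta here passes. A minimal reproduction of that arithmetic with the timestamps from the log (the 2s threshold is an assumption, the actual limit is not shown in this output):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest and host timestamps taken from the fix.go lines above.
        guest := time.Unix(1713381887, 986837354).UTC()
        host := time.Date(2024, 4, 17, 19, 24, 47, 901680293, time.UTC)
        delta := guest.Sub(host)
        const tolerance = 2 * time.Second // assumed threshold for illustration only
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }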
	I0417 19:24:48.006011  112272 start.go:83] releasing machines lock for "multinode-990943", held for 1m31.902056736s
	I0417 19:24:48.006037  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.006333  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:24:48.009221  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.009608  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.009636  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.009776  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010410  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010611  112272 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:24:48.010710  112272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:24:48.010751  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:48.010807  112272 ssh_runner.go:195] Run: cat /version.json
	I0417 19:24:48.010833  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:24:48.013164  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013445  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013552  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.013592  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013711  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:48.013845  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:48.013868  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:48.013878  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:48.014028  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:24:48.014041  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:48.014232  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:48.014253  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:24:48.014388  112272 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:24:48.014508  112272 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:24:48.115278  112272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0417 19:24:48.116125  112272 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0417 19:24:48.116292  112272 ssh_runner.go:195] Run: systemctl --version
	I0417 19:24:48.122579  112272 command_runner.go:130] > systemd 252 (252)
	I0417 19:24:48.122644  112272 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0417 19:24:48.122712  112272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:24:48.290301  112272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0417 19:24:48.298812  112272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0417 19:24:48.298864  112272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:24:48.298911  112272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:24:48.309592  112272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
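
Before detecting the cgroup driver, any bridge/podman CNI configs are renamed to *.mk_disabled so they do not clash with minikube's own CNI; the find/mv command above found nothing to rename. As an illustrative sketch only (not the real implementation; filepath.Glob approximates find's -maxdepth 1 and does not filter on file type), the same pass could look like:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Rename bridge/podman CNI configs in /etc/cni/net.d so CRI-O ignores them.
        entries, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, p := range entries {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                fmt.Println("disabling", p)
                _ = os.Rename(p, p+".mk_disabled")
            }
        }
    }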
	I0417 19:24:48.309617  112272 start.go:494] detecting cgroup driver to use...
	I0417 19:24:48.309699  112272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:24:48.325938  112272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:24:48.340430  112272 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:24:48.340495  112272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:24:48.354181  112272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:24:48.368319  112272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:24:48.514780  112272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:24:48.658227  112272 docker.go:233] disabling docker service ...
	I0417 19:24:48.658316  112272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:24:48.674994  112272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:24:48.689366  112272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:24:48.838279  112272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:24:48.993002  112272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:24:49.007941  112272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:24:49.027551  112272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0417 19:24:49.027612  112272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:24:49.027674  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.039251  112272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:24:49.039339  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.050742  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.061934  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.074547  112272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:24:49.087714  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.100297  112272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.112513  112272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:24:49.125177  112272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:24:49.136481  112272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0417 19:24:49.136748  112272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:24:49.148306  112272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:24:49.296369  112272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:24:50.780113  112272 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.483693227s)
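
The sed one-liners above pin the pause image to registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, and put conmon in the "pod" cgroup in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. The sketch below applies the same edits to an in-memory sample of that drop-in (the starting contents are assumed purely for illustration):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A sample drop-in roughly shaped like /etc/crio/crio.conf.d/02-crio.conf.
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

        // Mirror the sed edits from the log: pin the pause image, switch to cgroupfs,
        // drop the old conmon_cgroup line and re-add it as "pod".
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }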
	I0417 19:24:50.780166  112272 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:24:50.780227  112272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:24:50.785569  112272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0417 19:24:50.785591  112272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0417 19:24:50.785600  112272 command_runner.go:130] > Device: 0,22	Inode: 1330        Links: 1
	I0417 19:24:50.785623  112272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0417 19:24:50.785636  112272 command_runner.go:130] > Access: 2024-04-17 19:24:50.655242829 +0000
	I0417 19:24:50.785644  112272 command_runner.go:130] > Modify: 2024-04-17 19:24:50.649242696 +0000
	I0417 19:24:50.785652  112272 command_runner.go:130] > Change: 2024-04-17 19:24:50.649242696 +0000
	I0417 19:24:50.785662  112272 command_runner.go:130] >  Birth: -
	I0417 19:24:50.785754  112272 start.go:562] Will wait 60s for crictl version
	I0417 19:24:50.785812  112272 ssh_runner.go:195] Run: which crictl
	I0417 19:24:50.789820  112272 command_runner.go:130] > /usr/bin/crictl
	I0417 19:24:50.789991  112272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:24:50.835756  112272 command_runner.go:130] > Version:  0.1.0
	I0417 19:24:50.835781  112272 command_runner.go:130] > RuntimeName:  cri-o
	I0417 19:24:50.835786  112272 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0417 19:24:50.835791  112272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0417 19:24:50.837170  112272 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:24:50.837277  112272 ssh_runner.go:195] Run: crio --version
	I0417 19:24:50.873531  112272 command_runner.go:130] > crio version 1.29.1
	I0417 19:24:50.873557  112272 command_runner.go:130] > Version:        1.29.1
	I0417 19:24:50.873563  112272 command_runner.go:130] > GitCommit:      unknown
	I0417 19:24:50.873567  112272 command_runner.go:130] > GitCommitDate:  unknown
	I0417 19:24:50.873571  112272 command_runner.go:130] > GitTreeState:   clean
	I0417 19:24:50.873577  112272 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0417 19:24:50.873581  112272 command_runner.go:130] > GoVersion:      go1.21.6
	I0417 19:24:50.873585  112272 command_runner.go:130] > Compiler:       gc
	I0417 19:24:50.873590  112272 command_runner.go:130] > Platform:       linux/amd64
	I0417 19:24:50.873594  112272 command_runner.go:130] > Linkmode:       dynamic
	I0417 19:24:50.873598  112272 command_runner.go:130] > BuildTags:      
	I0417 19:24:50.873603  112272 command_runner.go:130] >   containers_image_ostree_stub
	I0417 19:24:50.873606  112272 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0417 19:24:50.873610  112272 command_runner.go:130] >   btrfs_noversion
	I0417 19:24:50.873614  112272 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0417 19:24:50.873618  112272 command_runner.go:130] >   libdm_no_deferred_remove
	I0417 19:24:50.873621  112272 command_runner.go:130] >   seccomp
	I0417 19:24:50.873624  112272 command_runner.go:130] > LDFlags:          unknown
	I0417 19:24:50.873628  112272 command_runner.go:130] > SeccompEnabled:   true
	I0417 19:24:50.873632  112272 command_runner.go:130] > AppArmorEnabled:  false
	I0417 19:24:50.873697  112272 ssh_runner.go:195] Run: crio --version
	I0417 19:24:50.902409  112272 command_runner.go:130] > crio version 1.29.1
	I0417 19:24:50.902465  112272 command_runner.go:130] > Version:        1.29.1
	I0417 19:24:50.902475  112272 command_runner.go:130] > GitCommit:      unknown
	I0417 19:24:50.902482  112272 command_runner.go:130] > GitCommitDate:  unknown
	I0417 19:24:50.902488  112272 command_runner.go:130] > GitTreeState:   clean
	I0417 19:24:50.902497  112272 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0417 19:24:50.902503  112272 command_runner.go:130] > GoVersion:      go1.21.6
	I0417 19:24:50.902509  112272 command_runner.go:130] > Compiler:       gc
	I0417 19:24:50.902513  112272 command_runner.go:130] > Platform:       linux/amd64
	I0417 19:24:50.902517  112272 command_runner.go:130] > Linkmode:       dynamic
	I0417 19:24:50.902521  112272 command_runner.go:130] > BuildTags:      
	I0417 19:24:50.902525  112272 command_runner.go:130] >   containers_image_ostree_stub
	I0417 19:24:50.902530  112272 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0417 19:24:50.902534  112272 command_runner.go:130] >   btrfs_noversion
	I0417 19:24:50.902539  112272 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0417 19:24:50.902543  112272 command_runner.go:130] >   libdm_no_deferred_remove
	I0417 19:24:50.902547  112272 command_runner.go:130] >   seccomp
	I0417 19:24:50.902551  112272 command_runner.go:130] > LDFlags:          unknown
	I0417 19:24:50.902555  112272 command_runner.go:130] > SeccompEnabled:   true
	I0417 19:24:50.902559  112272 command_runner.go:130] > AppArmorEnabled:  false
	I0417 19:24:50.908320  112272 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 19:24:50.909929  112272 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:24:50.912518  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:50.912847  112272 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:24:50.912883  112272 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:24:50.913077  112272 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:24:50.924057  112272 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0417 19:24:50.924342  112272 kubeadm.go:877] updating cluster {Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:24:50.924514  112272 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:24:50.924581  112272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:24:50.961109  112272 command_runner.go:130] > {
	I0417 19:24:50.961132  112272 command_runner.go:130] >   "images": [
	I0417 19:24:50.961137  112272 command_runner.go:130] >     {
	I0417 19:24:50.961144  112272 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0417 19:24:50.961149  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961154  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0417 19:24:50.961158  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961166  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961174  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0417 19:24:50.961181  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0417 19:24:50.961185  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961189  112272 command_runner.go:130] >       "size": "65291810",
	I0417 19:24:50.961193  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961197  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961207  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961212  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961215  112272 command_runner.go:130] >     },
	I0417 19:24:50.961219  112272 command_runner.go:130] >     {
	I0417 19:24:50.961230  112272 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0417 19:24:50.961238  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961243  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0417 19:24:50.961247  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961250  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961257  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0417 19:24:50.961264  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0417 19:24:50.961270  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961274  112272 command_runner.go:130] >       "size": "1363676",
	I0417 19:24:50.961278  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961284  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961290  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961294  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961297  112272 command_runner.go:130] >     },
	I0417 19:24:50.961301  112272 command_runner.go:130] >     {
	I0417 19:24:50.961309  112272 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0417 19:24:50.961314  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961338  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0417 19:24:50.961342  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961345  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961352  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0417 19:24:50.961362  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0417 19:24:50.961368  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961372  112272 command_runner.go:130] >       "size": "31470524",
	I0417 19:24:50.961376  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961389  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961396  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961400  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961403  112272 command_runner.go:130] >     },
	I0417 19:24:50.961406  112272 command_runner.go:130] >     {
	I0417 19:24:50.961412  112272 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0417 19:24:50.961418  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961427  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0417 19:24:50.961433  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961437  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961447  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0417 19:24:50.961463  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0417 19:24:50.961470  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961475  112272 command_runner.go:130] >       "size": "61245718",
	I0417 19:24:50.961482  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961486  112272 command_runner.go:130] >       "username": "nonroot",
	I0417 19:24:50.961490  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961495  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961500  112272 command_runner.go:130] >     },
	I0417 19:24:50.961504  112272 command_runner.go:130] >     {
	I0417 19:24:50.961512  112272 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0417 19:24:50.961516  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961522  112272 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0417 19:24:50.961525  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961529  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961537  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0417 19:24:50.961544  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0417 19:24:50.961548  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961552  112272 command_runner.go:130] >       "size": "150779692",
	I0417 19:24:50.961557  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961561  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961567  112272 command_runner.go:130] >       },
	I0417 19:24:50.961571  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961575  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961579  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961582  112272 command_runner.go:130] >     },
	I0417 19:24:50.961585  112272 command_runner.go:130] >     {
	I0417 19:24:50.961598  112272 command_runner.go:130] >       "id": "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1",
	I0417 19:24:50.961604  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961609  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0-rc.2"
	I0417 19:24:50.961615  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961619  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961627  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053",
	I0417 19:24:50.961640  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e"
	I0417 19:24:50.961646  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961650  112272 command_runner.go:130] >       "size": "117609952",
	I0417 19:24:50.961653  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961657  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961661  112272 command_runner.go:130] >       },
	I0417 19:24:50.961664  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961668  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961672  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961675  112272 command_runner.go:130] >     },
	I0417 19:24:50.961678  112272 command_runner.go:130] >     {
	I0417 19:24:50.961684  112272 command_runner.go:130] >       "id": "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b",
	I0417 19:24:50.961690  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961696  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"
	I0417 19:24:50.961701  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961705  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961731  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8",
	I0417 19:24:50.961747  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2"
	I0417 19:24:50.961750  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961754  112272 command_runner.go:130] >       "size": "112170310",
	I0417 19:24:50.961758  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961762  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961770  112272 command_runner.go:130] >       },
	I0417 19:24:50.961776  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961783  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961789  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961793  112272 command_runner.go:130] >     },
	I0417 19:24:50.961796  112272 command_runner.go:130] >     {
	I0417 19:24:50.961805  112272 command_runner.go:130] >       "id": "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e",
	I0417 19:24:50.961811  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961817  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0-rc.2"
	I0417 19:24:50.961821  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961825  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961849  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5",
	I0417 19:24:50.961858  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d"
	I0417 19:24:50.961862  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961870  112272 command_runner.go:130] >       "size": "85932953",
	I0417 19:24:50.961874  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.961877  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961881  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961884  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961887  112272 command_runner.go:130] >     },
	I0417 19:24:50.961890  112272 command_runner.go:130] >     {
	I0417 19:24:50.961896  112272 command_runner.go:130] >       "id": "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6",
	I0417 19:24:50.961900  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961904  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0-rc.2"
	I0417 19:24:50.961907  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961911  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961918  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543",
	I0417 19:24:50.961925  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216"
	I0417 19:24:50.961928  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961931  112272 command_runner.go:130] >       "size": "63026500",
	I0417 19:24:50.961935  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.961938  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.961941  112272 command_runner.go:130] >       },
	I0417 19:24:50.961944  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.961948  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.961952  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.961954  112272 command_runner.go:130] >     },
	I0417 19:24:50.961957  112272 command_runner.go:130] >     {
	I0417 19:24:50.961963  112272 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0417 19:24:50.961967  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.961971  112272 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0417 19:24:50.961974  112272 command_runner.go:130] >       ],
	I0417 19:24:50.961978  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.961984  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0417 19:24:50.961991  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0417 19:24:50.961997  112272 command_runner.go:130] >       ],
	I0417 19:24:50.962001  112272 command_runner.go:130] >       "size": "750414",
	I0417 19:24:50.962004  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.962008  112272 command_runner.go:130] >         "value": "65535"
	I0417 19:24:50.962012  112272 command_runner.go:130] >       },
	I0417 19:24:50.962021  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.962027  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.962030  112272 command_runner.go:130] >       "pinned": true
	I0417 19:24:50.962036  112272 command_runner.go:130] >     }
	I0417 19:24:50.962042  112272 command_runner.go:130] >   ]
	I0417 19:24:50.962045  112272 command_runner.go:130] > }
	I0417 19:24:50.962799  112272 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:24:50.962814  112272 crio.go:433] Images already preloaded, skipping extraction
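
The lines above list the runtime's images via "sudo crictl images --output json" and then conclude that everything needed for v1.30.0-rc.2 is already present, so the preload tarball is not re-extracted. A small, self-contained sketch of such a check against an abbreviated copy of that JSON (the required-image list here is just a spot check, not minikube's full list):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Minimal subset of the `crictl images --output json` schema shown above.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Abbreviated sample of the output logged above.
        out := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0-rc.2"]},{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)

        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Spot-check two of the images the preload is expected to contain.
        for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.30.0-rc.2", "registry.k8s.io/pause:3.9"} {
            fmt.Printf("%s preloaded: %v\n", want, have[want])
        }
    }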
	I0417 19:24:50.962861  112272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:24:50.996360  112272 command_runner.go:130] > {
	I0417 19:24:50.996385  112272 command_runner.go:130] >   "images": [
	I0417 19:24:50.996390  112272 command_runner.go:130] >     {
	I0417 19:24:50.996398  112272 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0417 19:24:50.996403  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996408  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0417 19:24:50.996411  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996416  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996424  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0417 19:24:50.996430  112272 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0417 19:24:50.996434  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996438  112272 command_runner.go:130] >       "size": "65291810",
	I0417 19:24:50.996441  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996445  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996451  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996455  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996458  112272 command_runner.go:130] >     },
	I0417 19:24:50.996461  112272 command_runner.go:130] >     {
	I0417 19:24:50.996467  112272 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0417 19:24:50.996472  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996481  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0417 19:24:50.996487  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996491  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996498  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0417 19:24:50.996505  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0417 19:24:50.996510  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996514  112272 command_runner.go:130] >       "size": "1363676",
	I0417 19:24:50.996518  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996531  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996541  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996545  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996548  112272 command_runner.go:130] >     },
	I0417 19:24:50.996557  112272 command_runner.go:130] >     {
	I0417 19:24:50.996566  112272 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0417 19:24:50.996594  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996601  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0417 19:24:50.996604  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996608  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996616  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0417 19:24:50.996625  112272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0417 19:24:50.996629  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996633  112272 command_runner.go:130] >       "size": "31470524",
	I0417 19:24:50.996637  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996641  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996647  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996655  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996659  112272 command_runner.go:130] >     },
	I0417 19:24:50.996662  112272 command_runner.go:130] >     {
	I0417 19:24:50.996668  112272 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0417 19:24:50.996674  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996678  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0417 19:24:50.996682  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996686  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996693  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0417 19:24:50.996710  112272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0417 19:24:50.996717  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996721  112272 command_runner.go:130] >       "size": "61245718",
	I0417 19:24:50.996724  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.996729  112272 command_runner.go:130] >       "username": "nonroot",
	I0417 19:24:50.996735  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996739  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996742  112272 command_runner.go:130] >     },
	I0417 19:24:50.996746  112272 command_runner.go:130] >     {
	I0417 19:24:50.996751  112272 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0417 19:24:50.996761  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996778  112272 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0417 19:24:50.996782  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996786  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996801  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0417 19:24:50.996813  112272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0417 19:24:50.996821  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996828  112272 command_runner.go:130] >       "size": "150779692",
	I0417 19:24:50.996836  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.996842  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.996849  112272 command_runner.go:130] >       },
	I0417 19:24:50.996853  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996860  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996864  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996870  112272 command_runner.go:130] >     },
	I0417 19:24:50.996873  112272 command_runner.go:130] >     {
	I0417 19:24:50.996879  112272 command_runner.go:130] >       "id": "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1",
	I0417 19:24:50.996885  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996890  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0-rc.2"
	I0417 19:24:50.996896  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996900  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.996907  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053",
	I0417 19:24:50.996916  112272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e"
	I0417 19:24:50.996919  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996923  112272 command_runner.go:130] >       "size": "117609952",
	I0417 19:24:50.996928  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.996931  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.996935  112272 command_runner.go:130] >       },
	I0417 19:24:50.996938  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.996942  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.996946  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.996949  112272 command_runner.go:130] >     },
	I0417 19:24:50.996957  112272 command_runner.go:130] >     {
	I0417 19:24:50.996965  112272 command_runner.go:130] >       "id": "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b",
	I0417 19:24:50.996969  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.996977  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"
	I0417 19:24:50.996987  112272 command_runner.go:130] >       ],
	I0417 19:24:50.996997  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997005  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8",
	I0417 19:24:50.997015  112272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2"
	I0417 19:24:50.997022  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997025  112272 command_runner.go:130] >       "size": "112170310",
	I0417 19:24:50.997029  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997033  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.997037  112272 command_runner.go:130] >       },
	I0417 19:24:50.997040  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997044  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997048  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997051  112272 command_runner.go:130] >     },
	I0417 19:24:50.997055  112272 command_runner.go:130] >     {
	I0417 19:24:50.997061  112272 command_runner.go:130] >       "id": "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e",
	I0417 19:24:50.997067  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997072  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0-rc.2"
	I0417 19:24:50.997077  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997081  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997101  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5",
	I0417 19:24:50.997115  112272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d"
	I0417 19:24:50.997118  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997121  112272 command_runner.go:130] >       "size": "85932953",
	I0417 19:24:50.997125  112272 command_runner.go:130] >       "uid": null,
	I0417 19:24:50.997128  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997132  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997136  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997139  112272 command_runner.go:130] >     },
	I0417 19:24:50.997142  112272 command_runner.go:130] >     {
	I0417 19:24:50.997150  112272 command_runner.go:130] >       "id": "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6",
	I0417 19:24:50.997156  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997161  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0-rc.2"
	I0417 19:24:50.997167  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997171  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997179  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543",
	I0417 19:24:50.997186  112272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216"
	I0417 19:24:50.997197  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997203  112272 command_runner.go:130] >       "size": "63026500",
	I0417 19:24:50.997207  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997211  112272 command_runner.go:130] >         "value": "0"
	I0417 19:24:50.997214  112272 command_runner.go:130] >       },
	I0417 19:24:50.997217  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997221  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997225  112272 command_runner.go:130] >       "pinned": false
	I0417 19:24:50.997228  112272 command_runner.go:130] >     },
	I0417 19:24:50.997232  112272 command_runner.go:130] >     {
	I0417 19:24:50.997237  112272 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0417 19:24:50.997244  112272 command_runner.go:130] >       "repoTags": [
	I0417 19:24:50.997248  112272 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0417 19:24:50.997251  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997255  112272 command_runner.go:130] >       "repoDigests": [
	I0417 19:24:50.997264  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0417 19:24:50.997273  112272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0417 19:24:50.997276  112272 command_runner.go:130] >       ],
	I0417 19:24:50.997280  112272 command_runner.go:130] >       "size": "750414",
	I0417 19:24:50.997283  112272 command_runner.go:130] >       "uid": {
	I0417 19:24:50.997288  112272 command_runner.go:130] >         "value": "65535"
	I0417 19:24:50.997294  112272 command_runner.go:130] >       },
	I0417 19:24:50.997298  112272 command_runner.go:130] >       "username": "",
	I0417 19:24:50.997301  112272 command_runner.go:130] >       "spec": null,
	I0417 19:24:50.997305  112272 command_runner.go:130] >       "pinned": true
	I0417 19:24:50.997309  112272 command_runner.go:130] >     }
	I0417 19:24:50.997312  112272 command_runner.go:130] >   ]
	I0417 19:24:50.997315  112272 command_runner.go:130] > }
	I0417 19:24:50.997978  112272 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:24:50.998014  112272 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:24:50.998036  112272 kubeadm.go:928] updating node { 192.168.39.106 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:24:50.998184  112272 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-990943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:24:50.998278  112272 ssh_runner.go:195] Run: crio config
	I0417 19:24:51.041255  112272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0417 19:24:51.041289  112272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0417 19:24:51.041299  112272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0417 19:24:51.041305  112272 command_runner.go:130] > #
	I0417 19:24:51.041315  112272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0417 19:24:51.041323  112272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0417 19:24:51.041331  112272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0417 19:24:51.041351  112272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0417 19:24:51.041358  112272 command_runner.go:130] > # reload'.
	I0417 19:24:51.041376  112272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0417 19:24:51.041390  112272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0417 19:24:51.041401  112272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0417 19:24:51.041412  112272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0417 19:24:51.041429  112272 command_runner.go:130] > [crio]
	I0417 19:24:51.041440  112272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0417 19:24:51.041451  112272 command_runner.go:130] > # containers images, in this directory.
	I0417 19:24:51.041459  112272 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0417 19:24:51.041476  112272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0417 19:24:51.041487  112272 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0417 19:24:51.041501  112272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0417 19:24:51.041511  112272 command_runner.go:130] > # imagestore = ""
	I0417 19:24:51.041542  112272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0417 19:24:51.041556  112272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0417 19:24:51.041577  112272 command_runner.go:130] > storage_driver = "overlay"
	I0417 19:24:51.041590  112272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0417 19:24:51.041603  112272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0417 19:24:51.041613  112272 command_runner.go:130] > storage_option = [
	I0417 19:24:51.041625  112272 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0417 19:24:51.041633  112272 command_runner.go:130] > ]
	I0417 19:24:51.041645  112272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0417 19:24:51.041658  112272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0417 19:24:51.041667  112272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0417 19:24:51.041680  112272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0417 19:24:51.041693  112272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0417 19:24:51.041704  112272 command_runner.go:130] > # always happen on a node reboot
	I0417 19:24:51.041713  112272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0417 19:24:51.041735  112272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0417 19:24:51.041749  112272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0417 19:24:51.041760  112272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0417 19:24:51.041771  112272 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0417 19:24:51.041789  112272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0417 19:24:51.041806  112272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0417 19:24:51.041822  112272 command_runner.go:130] > # internal_wipe = true
	I0417 19:24:51.041838  112272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0417 19:24:51.041851  112272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0417 19:24:51.041861  112272 command_runner.go:130] > # internal_repair = false
	I0417 19:24:51.041877  112272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0417 19:24:51.041891  112272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0417 19:24:51.041903  112272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0417 19:24:51.041916  112272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0417 19:24:51.041929  112272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0417 19:24:51.041936  112272 command_runner.go:130] > [crio.api]
	I0417 19:24:51.041948  112272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0417 19:24:51.041959  112272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0417 19:24:51.041969  112272 command_runner.go:130] > # IP address on which the stream server will listen.
	I0417 19:24:51.041980  112272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0417 19:24:51.041991  112272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0417 19:24:51.042011  112272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0417 19:24:51.042018  112272 command_runner.go:130] > # stream_port = "0"
	I0417 19:24:51.042026  112272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0417 19:24:51.042032  112272 command_runner.go:130] > # stream_enable_tls = false
	I0417 19:24:51.042039  112272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0417 19:24:51.042044  112272 command_runner.go:130] > # stream_idle_timeout = ""
	I0417 19:24:51.042050  112272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0417 19:24:51.042056  112272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0417 19:24:51.042059  112272 command_runner.go:130] > # minutes.
	I0417 19:24:51.042063  112272 command_runner.go:130] > # stream_tls_cert = ""
	I0417 19:24:51.042069  112272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0417 19:24:51.042077  112272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0417 19:24:51.042081  112272 command_runner.go:130] > # stream_tls_key = ""
	I0417 19:24:51.042090  112272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0417 19:24:51.042095  112272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0417 19:24:51.042116  112272 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0417 19:24:51.042127  112272 command_runner.go:130] > # stream_tls_ca = ""
	I0417 19:24:51.042139  112272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0417 19:24:51.042150  112272 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0417 19:24:51.042169  112272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0417 19:24:51.042180  112272 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0417 19:24:51.042189  112272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0417 19:24:51.042200  112272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0417 19:24:51.042208  112272 command_runner.go:130] > [crio.runtime]
	I0417 19:24:51.042218  112272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0417 19:24:51.042230  112272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0417 19:24:51.042240  112272 command_runner.go:130] > # "nofile=1024:2048"
	I0417 19:24:51.042250  112272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0417 19:24:51.042259  112272 command_runner.go:130] > # default_ulimits = [
	I0417 19:24:51.042264  112272 command_runner.go:130] > # ]
	I0417 19:24:51.042270  112272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0417 19:24:51.042275  112272 command_runner.go:130] > # no_pivot = false
	I0417 19:24:51.042280  112272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0417 19:24:51.042289  112272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0417 19:24:51.042293  112272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0417 19:24:51.042303  112272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0417 19:24:51.042315  112272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0417 19:24:51.042325  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0417 19:24:51.042329  112272 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0417 19:24:51.042336  112272 command_runner.go:130] > # Cgroup setting for conmon
	I0417 19:24:51.042342  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0417 19:24:51.042348  112272 command_runner.go:130] > conmon_cgroup = "pod"
	I0417 19:24:51.042355  112272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0417 19:24:51.042362  112272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0417 19:24:51.042368  112272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0417 19:24:51.042374  112272 command_runner.go:130] > conmon_env = [
	I0417 19:24:51.042383  112272 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0417 19:24:51.042392  112272 command_runner.go:130] > ]
	I0417 19:24:51.042400  112272 command_runner.go:130] > # Additional environment variables to set for all the
	I0417 19:24:51.042412  112272 command_runner.go:130] > # containers. These are overridden if set in the
	I0417 19:24:51.042424  112272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0417 19:24:51.042434  112272 command_runner.go:130] > # default_env = [
	I0417 19:24:51.042440  112272 command_runner.go:130] > # ]
	I0417 19:24:51.042452  112272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0417 19:24:51.042466  112272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0417 19:24:51.042476  112272 command_runner.go:130] > # selinux = false
	I0417 19:24:51.042487  112272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0417 19:24:51.042500  112272 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0417 19:24:51.042512  112272 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0417 19:24:51.042522  112272 command_runner.go:130] > # seccomp_profile = ""
	I0417 19:24:51.042529  112272 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0417 19:24:51.042542  112272 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0417 19:24:51.042555  112272 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0417 19:24:51.042566  112272 command_runner.go:130] > # which might increase security.
	I0417 19:24:51.042580  112272 command_runner.go:130] > # This option is currently deprecated,
	I0417 19:24:51.042596  112272 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0417 19:24:51.042607  112272 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0417 19:24:51.042621  112272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0417 19:24:51.042636  112272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0417 19:24:51.042648  112272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0417 19:24:51.042662  112272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0417 19:24:51.042683  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.042701  112272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0417 19:24:51.042714  112272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0417 19:24:51.042724  112272 command_runner.go:130] > # the cgroup blockio controller.
	I0417 19:24:51.042729  112272 command_runner.go:130] > # blockio_config_file = ""
	I0417 19:24:51.042744  112272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0417 19:24:51.042754  112272 command_runner.go:130] > # blockio parameters.
	I0417 19:24:51.042759  112272 command_runner.go:130] > # blockio_reload = false
	I0417 19:24:51.042769  112272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0417 19:24:51.042779  112272 command_runner.go:130] > # irqbalance daemon.
	I0417 19:24:51.042789  112272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0417 19:24:51.042803  112272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0417 19:24:51.042818  112272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0417 19:24:51.042832  112272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0417 19:24:51.042845  112272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0417 19:24:51.042862  112272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0417 19:24:51.042876  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.042887  112272 command_runner.go:130] > # rdt_config_file = ""
	I0417 19:24:51.042898  112272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0417 19:24:51.042908  112272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0417 19:24:51.042947  112272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0417 19:24:51.042957  112272 command_runner.go:130] > # separate_pull_cgroup = ""
	I0417 19:24:51.042967  112272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0417 19:24:51.042980  112272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0417 19:24:51.042985  112272 command_runner.go:130] > # will be added.
	I0417 19:24:51.042991  112272 command_runner.go:130] > # default_capabilities = [
	I0417 19:24:51.043000  112272 command_runner.go:130] > # 	"CHOWN",
	I0417 19:24:51.043006  112272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0417 19:24:51.043016  112272 command_runner.go:130] > # 	"FSETID",
	I0417 19:24:51.043023  112272 command_runner.go:130] > # 	"FOWNER",
	I0417 19:24:51.043032  112272 command_runner.go:130] > # 	"SETGID",
	I0417 19:24:51.043038  112272 command_runner.go:130] > # 	"SETUID",
	I0417 19:24:51.043047  112272 command_runner.go:130] > # 	"SETPCAP",
	I0417 19:24:51.043054  112272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0417 19:24:51.043062  112272 command_runner.go:130] > # 	"KILL",
	I0417 19:24:51.043068  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043080  112272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0417 19:24:51.043097  112272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0417 19:24:51.043107  112272 command_runner.go:130] > # add_inheritable_capabilities = false
	I0417 19:24:51.043117  112272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0417 19:24:51.043130  112272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0417 19:24:51.043146  112272 command_runner.go:130] > default_sysctls = [
	I0417 19:24:51.043160  112272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0417 19:24:51.043167  112272 command_runner.go:130] > ]
	I0417 19:24:51.043175  112272 command_runner.go:130] > # List of devices on the host that a
	I0417 19:24:51.043185  112272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0417 19:24:51.043189  112272 command_runner.go:130] > # allowed_devices = [
	I0417 19:24:51.043193  112272 command_runner.go:130] > # 	"/dev/fuse",
	I0417 19:24:51.043197  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043203  112272 command_runner.go:130] > # List of additional devices. specified as
	I0417 19:24:51.043217  112272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0417 19:24:51.043229  112272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0417 19:24:51.043239  112272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0417 19:24:51.043249  112272 command_runner.go:130] > # additional_devices = [
	I0417 19:24:51.043255  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043266  112272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0417 19:24:51.043276  112272 command_runner.go:130] > # cdi_spec_dirs = [
	I0417 19:24:51.043282  112272 command_runner.go:130] > # 	"/etc/cdi",
	I0417 19:24:51.043291  112272 command_runner.go:130] > # 	"/var/run/cdi",
	I0417 19:24:51.043296  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043310  112272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0417 19:24:51.043322  112272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0417 19:24:51.043332  112272 command_runner.go:130] > # Defaults to false.
	I0417 19:24:51.043340  112272 command_runner.go:130] > # device_ownership_from_security_context = false
	I0417 19:24:51.043353  112272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0417 19:24:51.043365  112272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0417 19:24:51.043374  112272 command_runner.go:130] > # hooks_dir = [
	I0417 19:24:51.043381  112272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0417 19:24:51.043388  112272 command_runner.go:130] > # ]
	I0417 19:24:51.043394  112272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0417 19:24:51.043401  112272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0417 19:24:51.043408  112272 command_runner.go:130] > # its default mounts from the following two files:
	I0417 19:24:51.043416  112272 command_runner.go:130] > #
	I0417 19:24:51.043430  112272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0417 19:24:51.043439  112272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0417 19:24:51.043444  112272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0417 19:24:51.043449  112272 command_runner.go:130] > #
	I0417 19:24:51.043454  112272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0417 19:24:51.043464  112272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0417 19:24:51.043469  112272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0417 19:24:51.043475  112272 command_runner.go:130] > #      only add mounts it finds in this file.
	I0417 19:24:51.043478  112272 command_runner.go:130] > #
	I0417 19:24:51.043481  112272 command_runner.go:130] > # default_mounts_file = ""
	I0417 19:24:51.043489  112272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0417 19:24:51.043503  112272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0417 19:24:51.043509  112272 command_runner.go:130] > pids_limit = 1024
	I0417 19:24:51.043522  112272 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0417 19:24:51.043534  112272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0417 19:24:51.043548  112272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0417 19:24:51.043563  112272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0417 19:24:51.043596  112272 command_runner.go:130] > # log_size_max = -1
	I0417 19:24:51.043611  112272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0417 19:24:51.043620  112272 command_runner.go:130] > # log_to_journald = false
	I0417 19:24:51.043629  112272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0417 19:24:51.043641  112272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0417 19:24:51.043652  112272 command_runner.go:130] > # Path to directory for container attach sockets.
	I0417 19:24:51.043663  112272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0417 19:24:51.043671  112272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0417 19:24:51.043681  112272 command_runner.go:130] > # bind_mount_prefix = ""
	I0417 19:24:51.043688  112272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0417 19:24:51.043701  112272 command_runner.go:130] > # read_only = false
	I0417 19:24:51.043713  112272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0417 19:24:51.043727  112272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0417 19:24:51.043737  112272 command_runner.go:130] > # live configuration reload.
	I0417 19:24:51.043744  112272 command_runner.go:130] > # log_level = "info"
	I0417 19:24:51.043755  112272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0417 19:24:51.043766  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.043775  112272 command_runner.go:130] > # log_filter = ""
	I0417 19:24:51.043785  112272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0417 19:24:51.043805  112272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0417 19:24:51.043815  112272 command_runner.go:130] > # separated by comma.
	I0417 19:24:51.043828  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043838  112272 command_runner.go:130] > # uid_mappings = ""
	I0417 19:24:51.043846  112272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0417 19:24:51.043854  112272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0417 19:24:51.043860  112272 command_runner.go:130] > # separated by comma.
	I0417 19:24:51.043872  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043882  112272 command_runner.go:130] > # gid_mappings = ""
	I0417 19:24:51.043892  112272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0417 19:24:51.043905  112272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0417 19:24:51.043915  112272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0417 19:24:51.043933  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.043949  112272 command_runner.go:130] > # minimum_mappable_uid = -1
	I0417 19:24:51.043959  112272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0417 19:24:51.043966  112272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0417 19:24:51.043978  112272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0417 19:24:51.043991  112272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0417 19:24:51.044001  112272 command_runner.go:130] > # minimum_mappable_gid = -1
	I0417 19:24:51.044010  112272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0417 19:24:51.044023  112272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0417 19:24:51.044032  112272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0417 19:24:51.044040  112272 command_runner.go:130] > # ctr_stop_timeout = 30
	I0417 19:24:51.044046  112272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0417 19:24:51.044055  112272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0417 19:24:51.044063  112272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0417 19:24:51.044074  112272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0417 19:24:51.044081  112272 command_runner.go:130] > drop_infra_ctr = false
	I0417 19:24:51.044095  112272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0417 19:24:51.044107  112272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0417 19:24:51.044118  112272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0417 19:24:51.044127  112272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0417 19:24:51.044139  112272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0417 19:24:51.044149  112272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0417 19:24:51.044154  112272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0417 19:24:51.044166  112272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0417 19:24:51.044183  112272 command_runner.go:130] > # shared_cpuset = ""
	I0417 19:24:51.044196  112272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0417 19:24:51.044211  112272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0417 19:24:51.044221  112272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0417 19:24:51.044231  112272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0417 19:24:51.044241  112272 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0417 19:24:51.044248  112272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0417 19:24:51.044258  112272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0417 19:24:51.044265  112272 command_runner.go:130] > # enable_criu_support = false
	I0417 19:24:51.044276  112272 command_runner.go:130] > # Enable/disable the generation of the container,
	I0417 19:24:51.044285  112272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0417 19:24:51.044296  112272 command_runner.go:130] > # enable_pod_events = false
	I0417 19:24:51.044310  112272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0417 19:24:51.044323  112272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0417 19:24:51.044334  112272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0417 19:24:51.044340  112272 command_runner.go:130] > # default_runtime = "runc"
	I0417 19:24:51.044348  112272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0417 19:24:51.044358  112272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0417 19:24:51.044375  112272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0417 19:24:51.044386  112272 command_runner.go:130] > # creation as a file is not desired either.
	I0417 19:24:51.044403  112272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0417 19:24:51.044413  112272 command_runner.go:130] > # the hostname is being managed dynamically.
	I0417 19:24:51.044421  112272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0417 19:24:51.044429  112272 command_runner.go:130] > # ]
	I0417 19:24:51.044435  112272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0417 19:24:51.044447  112272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0417 19:24:51.044460  112272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0417 19:24:51.044472  112272 command_runner.go:130] > # Each entry in the table should follow the format:
	I0417 19:24:51.044481  112272 command_runner.go:130] > #
	I0417 19:24:51.044491  112272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0417 19:24:51.044502  112272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0417 19:24:51.044564  112272 command_runner.go:130] > # runtime_type = "oci"
	I0417 19:24:51.044582  112272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0417 19:24:51.044594  112272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0417 19:24:51.044606  112272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0417 19:24:51.044616  112272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0417 19:24:51.044634  112272 command_runner.go:130] > # monitor_env = []
	I0417 19:24:51.044645  112272 command_runner.go:130] > # privileged_without_host_devices = false
	I0417 19:24:51.044652  112272 command_runner.go:130] > # allowed_annotations = []
	I0417 19:24:51.044658  112272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0417 19:24:51.044667  112272 command_runner.go:130] > # Where:
	I0417 19:24:51.044678  112272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0417 19:24:51.044691  112272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0417 19:24:51.044704  112272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0417 19:24:51.044719  112272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0417 19:24:51.044726  112272 command_runner.go:130] > #   in $PATH.
	I0417 19:24:51.044735  112272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0417 19:24:51.044745  112272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0417 19:24:51.044754  112272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0417 19:24:51.044765  112272 command_runner.go:130] > #   state.
	I0417 19:24:51.044790  112272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0417 19:24:51.044803  112272 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0417 19:24:51.044816  112272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0417 19:24:51.044829  112272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0417 19:24:51.044842  112272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0417 19:24:51.044854  112272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0417 19:24:51.044865  112272 command_runner.go:130] > #   The currently recognized values are:
	I0417 19:24:51.044877  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0417 19:24:51.044888  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0417 19:24:51.044896  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0417 19:24:51.044904  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0417 19:24:51.044913  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0417 19:24:51.044922  112272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0417 19:24:51.044930  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0417 19:24:51.044938  112272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0417 19:24:51.044944  112272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0417 19:24:51.044952  112272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0417 19:24:51.044957  112272 command_runner.go:130] > #   deprecated option "conmon".
	I0417 19:24:51.044964  112272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0417 19:24:51.044971  112272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0417 19:24:51.044977  112272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0417 19:24:51.044985  112272 command_runner.go:130] > #   should be moved to the container's cgroup
	I0417 19:24:51.044999  112272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0417 19:24:51.045007  112272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0417 19:24:51.045013  112272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0417 19:24:51.045021  112272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0417 19:24:51.045024  112272 command_runner.go:130] > #
	I0417 19:24:51.045028  112272 command_runner.go:130] > # Using the seccomp notifier feature:
	I0417 19:24:51.045033  112272 command_runner.go:130] > #
	I0417 19:24:51.045040  112272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0417 19:24:51.045048  112272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0417 19:24:51.045054  112272 command_runner.go:130] > #
	I0417 19:24:51.045059  112272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0417 19:24:51.045067  112272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0417 19:24:51.045072  112272 command_runner.go:130] > #
	I0417 19:24:51.045078  112272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0417 19:24:51.045083  112272 command_runner.go:130] > # feature.
	I0417 19:24:51.045086  112272 command_runner.go:130] > #
	I0417 19:24:51.045094  112272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0417 19:24:51.045102  112272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0417 19:24:51.045111  112272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0417 19:24:51.045117  112272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0417 19:24:51.045124  112272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0417 19:24:51.045129  112272 command_runner.go:130] > #
	I0417 19:24:51.045135  112272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0417 19:24:51.045141  112272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0417 19:24:51.045146  112272 command_runner.go:130] > #
	I0417 19:24:51.045151  112272 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0417 19:24:51.045157  112272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0417 19:24:51.045162  112272 command_runner.go:130] > #
	I0417 19:24:51.045168  112272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0417 19:24:51.045176  112272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0417 19:24:51.045181  112272 command_runner.go:130] > # limitation.
	I0417 19:24:51.045185  112272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0417 19:24:51.045189  112272 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0417 19:24:51.045193  112272 command_runner.go:130] > runtime_type = "oci"
	I0417 19:24:51.045200  112272 command_runner.go:130] > runtime_root = "/run/runc"
	I0417 19:24:51.045210  112272 command_runner.go:130] > runtime_config_path = ""
	I0417 19:24:51.045222  112272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0417 19:24:51.045228  112272 command_runner.go:130] > monitor_cgroup = "pod"
	I0417 19:24:51.045233  112272 command_runner.go:130] > monitor_exec_cgroup = ""
	I0417 19:24:51.045239  112272 command_runner.go:130] > monitor_env = [
	I0417 19:24:51.045244  112272 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0417 19:24:51.045249  112272 command_runner.go:130] > ]
	I0417 19:24:51.045254  112272 command_runner.go:130] > privileged_without_host_devices = false
	I0417 19:24:51.045261  112272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0417 19:24:51.045267  112272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0417 19:24:51.045275  112272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0417 19:24:51.045282  112272 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0417 19:24:51.045292  112272 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0417 19:24:51.045299  112272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0417 19:24:51.045310  112272 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0417 19:24:51.045319  112272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0417 19:24:51.045325  112272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0417 19:24:51.045333  112272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0417 19:24:51.045337  112272 command_runner.go:130] > # Example:
	I0417 19:24:51.045341  112272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0417 19:24:51.045349  112272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0417 19:24:51.045353  112272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0417 19:24:51.045360  112272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0417 19:24:51.045364  112272 command_runner.go:130] > # cpuset = 0
	I0417 19:24:51.045368  112272 command_runner.go:130] > # cpushares = "0-1"
	I0417 19:24:51.045371  112272 command_runner.go:130] > # Where:
	I0417 19:24:51.045375  112272 command_runner.go:130] > # The workload name is workload-type.
	I0417 19:24:51.045384  112272 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0417 19:24:51.045390  112272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0417 19:24:51.045397  112272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0417 19:24:51.045404  112272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0417 19:24:51.045412  112272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
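The workloads table described in the comments above is driven entirely by pod annotations. Below is a minimal Go sketch, assuming the example names from those comments ("io.crio/workload" as the activation annotation, "io.crio.workload-type" as the prefix) plus a made-up container name "app" and cpushares value "512"; the exact annotation layout should be checked against the CRI-O version in use.

	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		// "io.crio/workload" opts the pod into the example workload; the second
		// annotation overrides cpushares for the (hypothetical) container "app",
		// following the $annotation_prefix.$resource/$ctrName form described above.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					"io.crio/workload":                    "",
					"io.crio.workload-type.cpushares/app": "512",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}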
	I0417 19:24:51.045419  112272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0417 19:24:51.045426  112272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0417 19:24:51.045431  112272 command_runner.go:130] > # Default value is set to true
	I0417 19:24:51.045437  112272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0417 19:24:51.045443  112272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0417 19:24:51.045454  112272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0417 19:24:51.045460  112272 command_runner.go:130] > # Default value is set to 'false'
	I0417 19:24:51.045465  112272 command_runner.go:130] > # disable_hostport_mapping = false
	I0417 19:24:51.045474  112272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0417 19:24:51.045484  112272 command_runner.go:130] > #
	I0417 19:24:51.045489  112272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0417 19:24:51.045495  112272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0417 19:24:51.045500  112272 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0417 19:24:51.045505  112272 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0417 19:24:51.045511  112272 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0417 19:24:51.045514  112272 command_runner.go:130] > [crio.image]
	I0417 19:24:51.045519  112272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0417 19:24:51.045523  112272 command_runner.go:130] > # default_transport = "docker://"
	I0417 19:24:51.045529  112272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0417 19:24:51.045534  112272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0417 19:24:51.045537  112272 command_runner.go:130] > # global_auth_file = ""
	I0417 19:24:51.045542  112272 command_runner.go:130] > # The image used to instantiate infra containers.
	I0417 19:24:51.045546  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.045551  112272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0417 19:24:51.045557  112272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0417 19:24:51.045561  112272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0417 19:24:51.045566  112272 command_runner.go:130] > # This option supports live configuration reload.
	I0417 19:24:51.045576  112272 command_runner.go:130] > # pause_image_auth_file = ""
	I0417 19:24:51.045581  112272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0417 19:24:51.045587  112272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0417 19:24:51.045593  112272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0417 19:24:51.045598  112272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0417 19:24:51.045601  112272 command_runner.go:130] > # pause_command = "/pause"
	I0417 19:24:51.045607  112272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0417 19:24:51.045612  112272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0417 19:24:51.045617  112272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0417 19:24:51.045624  112272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0417 19:24:51.045630  112272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0417 19:24:51.045635  112272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0417 19:24:51.045638  112272 command_runner.go:130] > # pinned_images = [
	I0417 19:24:51.045641  112272 command_runner.go:130] > # ]
	I0417 19:24:51.045651  112272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0417 19:24:51.045657  112272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0417 19:24:51.045663  112272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0417 19:24:51.045668  112272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0417 19:24:51.045675  112272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0417 19:24:51.045679  112272 command_runner.go:130] > # signature_policy = ""
	I0417 19:24:51.045684  112272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0417 19:24:51.045691  112272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0417 19:24:51.045700  112272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0417 19:24:51.045709  112272 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0417 19:24:51.045717  112272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0417 19:24:51.045722  112272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0417 19:24:51.045729  112272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0417 19:24:51.045742  112272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0417 19:24:51.045747  112272 command_runner.go:130] > # changing them here.
	I0417 19:24:51.045751  112272 command_runner.go:130] > # insecure_registries = [
	I0417 19:24:51.045757  112272 command_runner.go:130] > # ]
	I0417 19:24:51.045763  112272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0417 19:24:51.045770  112272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0417 19:24:51.045774  112272 command_runner.go:130] > # image_volumes = "mkdir"
	I0417 19:24:51.045781  112272 command_runner.go:130] > # Temporary directory to use for storing big files
	I0417 19:24:51.045785  112272 command_runner.go:130] > # big_files_temporary_dir = ""
	I0417 19:24:51.045793  112272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0417 19:24:51.045796  112272 command_runner.go:130] > # CNI plugins.
	I0417 19:24:51.045800  112272 command_runner.go:130] > [crio.network]
	I0417 19:24:51.045805  112272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0417 19:24:51.045813  112272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0417 19:24:51.045817  112272 command_runner.go:130] > # cni_default_network = ""
	I0417 19:24:51.045823  112272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0417 19:24:51.045827  112272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0417 19:24:51.045832  112272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0417 19:24:51.045838  112272 command_runner.go:130] > # plugin_dirs = [
	I0417 19:24:51.045842  112272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0417 19:24:51.045845  112272 command_runner.go:130] > # ]
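For reference, the network_dir default above can be inspected directly on the node. A minimal Go sketch, assuming the commented default path (it may differ on other hosts), that lists whatever CNI configuration files are present:

	package main
	
	import (
		"fmt"
		"log"
		"path/filepath"
	)
	
	func main() {
		// Default network_dir from the config echoed above; purely illustrative.
		matches, err := filepath.Glob("/etc/cni/net.d/*")
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			fmt.Println(m)
		}
	}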
	I0417 19:24:51.045850  112272 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0417 19:24:51.045857  112272 command_runner.go:130] > [crio.metrics]
	I0417 19:24:51.045867  112272 command_runner.go:130] > # Globally enable or disable metrics support.
	I0417 19:24:51.045873  112272 command_runner.go:130] > enable_metrics = true
	I0417 19:24:51.045878  112272 command_runner.go:130] > # Specify enabled metrics collectors.
	I0417 19:24:51.045882  112272 command_runner.go:130] > # Per default all metrics are enabled.
	I0417 19:24:51.045887  112272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0417 19:24:51.045895  112272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0417 19:24:51.045900  112272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0417 19:24:51.045907  112272 command_runner.go:130] > # metrics_collectors = [
	I0417 19:24:51.045910  112272 command_runner.go:130] > # 	"operations",
	I0417 19:24:51.045914  112272 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0417 19:24:51.045920  112272 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0417 19:24:51.045924  112272 command_runner.go:130] > # 	"operations_errors",
	I0417 19:24:51.045928  112272 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0417 19:24:51.045932  112272 command_runner.go:130] > # 	"image_pulls_by_name",
	I0417 19:24:51.045936  112272 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0417 19:24:51.045940  112272 command_runner.go:130] > # 	"image_pulls_failures",
	I0417 19:24:51.045949  112272 command_runner.go:130] > # 	"image_pulls_successes",
	I0417 19:24:51.045956  112272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0417 19:24:51.045965  112272 command_runner.go:130] > # 	"image_layer_reuse",
	I0417 19:24:51.045973  112272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0417 19:24:51.045981  112272 command_runner.go:130] > # 	"containers_oom_total",
	I0417 19:24:51.045987  112272 command_runner.go:130] > # 	"containers_oom",
	I0417 19:24:51.045995  112272 command_runner.go:130] > # 	"processes_defunct",
	I0417 19:24:51.046001  112272 command_runner.go:130] > # 	"operations_total",
	I0417 19:24:51.046009  112272 command_runner.go:130] > # 	"operations_latency_seconds",
	I0417 19:24:51.046019  112272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0417 19:24:51.046028  112272 command_runner.go:130] > # 	"operations_errors_total",
	I0417 19:24:51.046037  112272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0417 19:24:51.046050  112272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0417 19:24:51.046055  112272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0417 19:24:51.046059  112272 command_runner.go:130] > # 	"image_pulls_success_total",
	I0417 19:24:51.046063  112272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0417 19:24:51.046073  112272 command_runner.go:130] > # 	"containers_oom_count_total",
	I0417 19:24:51.046080  112272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0417 19:24:51.046085  112272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0417 19:24:51.046090  112272 command_runner.go:130] > # ]
	I0417 19:24:51.046101  112272 command_runner.go:130] > # The port on which the metrics server will listen.
	I0417 19:24:51.046108  112272 command_runner.go:130] > # metrics_port = 9090
	I0417 19:24:51.046113  112272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0417 19:24:51.046120  112272 command_runner.go:130] > # metrics_socket = ""
	I0417 19:24:51.046124  112272 command_runner.go:130] > # The certificate for the secure metrics server.
	I0417 19:24:51.046132  112272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0417 19:24:51.046137  112272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0417 19:24:51.046142  112272 command_runner.go:130] > # certificate on any modification event.
	I0417 19:24:51.046148  112272 command_runner.go:130] > # metrics_cert = ""
	I0417 19:24:51.046153  112272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0417 19:24:51.046160  112272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0417 19:24:51.046163  112272 command_runner.go:130] > # metrics_key = ""
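Because this profile sets enable_metrics = true, the collectors listed above are exposed in Prometheus text format on the default metrics_port 9090. A minimal Go sketch, assuming CRI-O is reachable on localhost at that default port, that scrapes the endpoint and prints the operation counters; the metric-name prefix follows the naming convention described in the comments and may vary between CRI-O versions:

	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)
	
	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
	
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Keep only the "operations" family, exposed with the
			// container_runtime_crio_ prefix per the comments above.
			if strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}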
	I0417 19:24:51.046169  112272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0417 19:24:51.046175  112272 command_runner.go:130] > [crio.tracing]
	I0417 19:24:51.046180  112272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0417 19:24:51.046186  112272 command_runner.go:130] > # enable_tracing = false
	I0417 19:24:51.046191  112272 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0417 19:24:51.046198  112272 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0417 19:24:51.046204  112272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0417 19:24:51.046211  112272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0417 19:24:51.046215  112272 command_runner.go:130] > # CRI-O NRI configuration.
	I0417 19:24:51.046221  112272 command_runner.go:130] > [crio.nri]
	I0417 19:24:51.046225  112272 command_runner.go:130] > # Globally enable or disable NRI.
	I0417 19:24:51.046230  112272 command_runner.go:130] > # enable_nri = false
	I0417 19:24:51.046235  112272 command_runner.go:130] > # NRI socket to listen on.
	I0417 19:24:51.046244  112272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0417 19:24:51.046251  112272 command_runner.go:130] > # NRI plugin directory to use.
	I0417 19:24:51.046255  112272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0417 19:24:51.046262  112272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0417 19:24:51.046267  112272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0417 19:24:51.046274  112272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0417 19:24:51.046278  112272 command_runner.go:130] > # nri_disable_connections = false
	I0417 19:24:51.046285  112272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0417 19:24:51.046289  112272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0417 19:24:51.046296  112272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0417 19:24:51.046300  112272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0417 19:24:51.046311  112272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0417 19:24:51.046320  112272 command_runner.go:130] > [crio.stats]
	I0417 19:24:51.046326  112272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0417 19:24:51.046334  112272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0417 19:24:51.046338  112272 command_runner.go:130] > # stats_collection_period = 0
	I0417 19:24:51.046374  112272 command_runner.go:130] ! time="2024-04-17 19:24:51.012743548Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0417 19:24:51.046391  112272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0417 19:24:51.046528  112272 cni.go:84] Creating CNI manager for ""
	I0417 19:24:51.046540  112272 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0417 19:24:51.046550  112272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:24:51.046578  112272 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-990943 NodeName:multinode-990943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:24:51.046727  112272 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-990943"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
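	The KubeletConfiguration generated above pins cgroupDriver to cgroupfs and effectively disables disk-pressure eviction by setting every evictionHard threshold to "0%". A minimal Go sketch, assuming gopkg.in/yaml.v3 is available, that decodes a trimmed copy of that document and prints those fields:
	
	package main
	
	import (
		"fmt"
		"log"
	
		"gopkg.in/yaml.v3"
	)
	
	// A short excerpt of the KubeletConfiguration document generated above.
	const kubeletDoc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	`
	
	func main() {
		var cfg struct {
			Kind         string            `yaml:"kind"`
			CgroupDriver string            `yaml:"cgroupDriver"`
			EvictionHard map[string]string `yaml:"evictionHard"`
			FailSwapOn   bool              `yaml:"failSwapOn"`
		}
		if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: cgroupDriver=%s failSwapOn=%v evictionHard=%v\n",
			cfg.Kind, cfg.CgroupDriver, cfg.FailSwapOn, cfg.EvictionHard)
	}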
	
	I0417 19:24:51.046783  112272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:24:51.057956  112272 command_runner.go:130] > kubeadm
	I0417 19:24:51.057978  112272 command_runner.go:130] > kubectl
	I0417 19:24:51.057982  112272 command_runner.go:130] > kubelet
	I0417 19:24:51.058025  112272 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:24:51.058070  112272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:24:51.068441  112272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0417 19:24:51.086142  112272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:24:51.103960  112272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0417 19:24:51.121894  112272 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0417 19:24:51.125971  112272 command_runner.go:130] > 192.168.39.106	control-plane.minikube.internal
	I0417 19:24:51.126043  112272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:24:51.268082  112272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:24:51.283297  112272 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943 for IP: 192.168.39.106
	I0417 19:24:51.283321  112272 certs.go:194] generating shared ca certs ...
	I0417 19:24:51.283337  112272 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:24:51.283497  112272 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:24:51.283541  112272 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:24:51.283552  112272 certs.go:256] generating profile certs ...
	I0417 19:24:51.283654  112272 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/client.key
	I0417 19:24:51.283715  112272 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key.edfc69ee
	I0417 19:24:51.283762  112272 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key
	I0417 19:24:51.283773  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0417 19:24:51.283797  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0417 19:24:51.283819  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0417 19:24:51.283832  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0417 19:24:51.283844  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0417 19:24:51.283857  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0417 19:24:51.283873  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0417 19:24:51.283885  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0417 19:24:51.283931  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:24:51.283956  112272 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:24:51.283965  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:24:51.283991  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:24:51.284017  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:24:51.284042  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:24:51.284076  112272 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:24:51.284101  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.284115  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem -> /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.284127  112272 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.284687  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:24:51.310323  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:24:51.335738  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:24:51.359398  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:24:51.383117  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0417 19:24:51.407748  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 19:24:51.431230  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:24:51.456633  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/multinode-990943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:24:51.480980  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:24:51.506296  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:24:51.530907  112272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:24:51.561182  112272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:24:51.590071  112272 ssh_runner.go:195] Run: openssl version
	I0417 19:24:51.596840  112272 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0417 19:24:51.596940  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:24:51.633382  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650892  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650939  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.650998  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:24:51.667892  112272 command_runner.go:130] > b5213941
	I0417 19:24:51.667998  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:24:51.679891  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:24:51.693043  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.697937  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.698267  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.698337  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:24:51.706971  112272 command_runner.go:130] > 51391683
	I0417 19:24:51.707162  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:24:51.718585  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:24:51.732855  112272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.737987  112272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.738128  112272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.738208  112272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:24:51.743961  112272 command_runner.go:130] > 3ec20f2e
	I0417 19:24:51.744292  112272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 19:24:51.754616  112272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:24:51.764217  112272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:24:51.764241  112272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0417 19:24:51.764254  112272 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0417 19:24:51.764265  112272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0417 19:24:51.764272  112272 command_runner.go:130] > Access: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764277  112272 command_runner.go:130] > Modify: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764293  112272 command_runner.go:130] > Change: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764300  112272 command_runner.go:130] >  Birth: 2024-04-17 19:18:41.738805986 +0000
	I0417 19:24:51.764669  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:24:51.777834  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.778131  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:24:51.788056  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.788655  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:24:51.794758  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.794831  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:24:51.800388  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.800709  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:24:51.806533  112272 command_runner.go:130] > Certificate will not expire
	I0417 19:24:51.806805  112272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0417 19:24:51.822078  112272 command_runner.go:130] > Certificate will not expire
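Each "openssl x509 ... -checkend 86400" call above simply asks whether the certificate will still be valid 24 hours from now. A rough Go equivalent of that check, using the apiserver-kubelet-client certificate path from the log (it would need to run on the node itself):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same condition as -checkend 86400: valid for at least another 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}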
	I0417 19:24:51.822169  112272 kubeadm.go:391] StartCluster: {Name:multinode-990943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0-rc.2 ClusterName:multinode-990943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.67 Port:0 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:24:51.822324  112272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:24:51.822383  112272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:24:51.880257  112272 command_runner.go:130] > acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2
	I0417 19:24:51.880309  112272 command_runner.go:130] > e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af
	I0417 19:24:51.880317  112272 command_runner.go:130] > dfd12fc760187cdc26809686c35b3e4460df331d96d22aa3d8093812c833263a
	I0417 19:24:51.880327  112272 command_runner.go:130] > 783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60
	I0417 19:24:51.880335  112272 command_runner.go:130] > bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed
	I0417 19:24:51.880340  112272 command_runner.go:130] > 3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42
	I0417 19:24:51.880349  112272 command_runner.go:130] > 1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674
	I0417 19:24:51.880358  112272 command_runner.go:130] > d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df
	I0417 19:24:51.880381  112272 command_runner.go:130] > e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112
	I0417 19:24:51.881505  112272 cri.go:89] found id: "acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2"
	I0417 19:24:51.881526  112272 cri.go:89] found id: "e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af"
	I0417 19:24:51.881551  112272 cri.go:89] found id: "dfd12fc760187cdc26809686c35b3e4460df331d96d22aa3d8093812c833263a"
	I0417 19:24:51.881560  112272 cri.go:89] found id: "783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60"
	I0417 19:24:51.881562  112272 cri.go:89] found id: "bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed"
	I0417 19:24:51.881570  112272 cri.go:89] found id: "3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42"
	I0417 19:24:51.881573  112272 cri.go:89] found id: "1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674"
	I0417 19:24:51.881576  112272 cri.go:89] found id: "d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df"
	I0417 19:24:51.881578  112272 cri.go:89] found id: "e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112"
	I0417 19:24:51.881584  112272 cri.go:89] found id: ""
	I0417 19:24:51.881631  112272 ssh_runner.go:195] Run: sudo runc list -f json
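The container IDs above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A minimal Go sketch of the same listing, assuming crictl is installed on the node and the command is run with sufficient privileges:

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Ask crictl for all kube-system containers (any state), IDs only,
		// mirroring the command shown in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}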
	
	
	==> CRI-O <==
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.501749797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ea50583-826c-42b1-9168-9b41d9188ab1 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.502818257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaa4e168-7193-46af-aeaa-f1e0e590c009 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.503230221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382122503209801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaa4e168-7193-46af-aeaa-f1e0e590c009 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.503923381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d08c711-1457-45de-8c26-bd53155d94ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.504117236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d08c711-1457-45de-8c26-bd53155d94ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.504629375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d08c711-1457-45de-8c26-bd53155d94ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.545051083Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e9ba0ab4-3847-4b75-bf4e-80d30ef91a21 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.545720292Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-th5ps,Uid:f81066d9-6c6e-44ca-9c5c-3acfaa971eca,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381931540410099,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:25:04.494124259Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-990943,Uid:b2d754d00c72a58fa0595b1a7dca2e8d,Namespace:kube-system
,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897782580970,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2d754d00c72a58fa0595b1a7dca2e8d,kubernetes.io/config.seen: 2024-04-17T19:18:50.477929455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&PodSandboxMetadata{Name:kube-proxy-ppn8d,Uid:431b5f9e-8334-49b7-a686-4883f93e09cd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897772843407,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-
4883f93e09cd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:04.278957885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-990943,Uid:01d1c1380b7f94bdea842f02cd62168a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897769264660,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.106:8443,kubernetes.io/config.hash: 01d1c1380b7f94bdea842f02cd62168a,kubernetes.io/config.seen: 2024-04-17T19:18:50.477928436Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSan
dbox{Id:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-990943,Uid:29306408692bf92f3a30b7b7a05afb2c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897767193954,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29306408692bf92f3a30b7b7a05afb2c,kubernetes.io/config.seen: 2024-04-17T19:18:50.477930205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-990943,Uid:6fef634132628159fb78ee540ccabb2d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897764586012,Labels:map[string]string{component: etcd,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.106:2379,kubernetes.io/config.hash: 6fef634132628159fb78ee540ccabb2d,kubernetes.io/config.seen: 2024-04-17T19:18:50.477925524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0788ef97-1d5f-4ec6-9194-2fc80bba71a0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897756581143,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Ann
otations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-17T19:19:06.054653345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&PodSandboxMetadata{Name:kindnet-qk7wm,Uid:ebc8028e-ef63-42b6-aeaf-fa45a37945a4,Names
pace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381897742224157,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:04.279038037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-dt7cs,Uid:b8349fcd-3024-4211-a9c3-4547c8f67778,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713381891612254306,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,k8s-app: kube-dns,pod-temp
late-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:06.049748578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-th5ps,Uid:f81066d9-6c6e-44ca-9c5c-3acfaa971eca,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381588353351485,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:48.030057849Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0788ef97-1d5f-4ec6-9194-2fc80bba71a0,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1713381546361427506,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-17T19:19:06.054653345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&PodSandboxMetadata{Name:kindnet-qk7wm,Uid:ebc8028e-ef63-42b6-aeaf-fa45a37945a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381544603325088,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:04.279038037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&PodSandboxMetadata{Name:kube-proxy-ppn8d,Uid:431b5f9e-8334-49b
7-a686-4883f93e09cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381544596254933,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883f93e09cd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:19:04.278957885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-990943,Uid:01d1c1380b7f94bdea842f02cd62168a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381525067280236,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94
bdea842f02cd62168a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.106:8443,kubernetes.io/config.hash: 01d1c1380b7f94bdea842f02cd62168a,kubernetes.io/config.seen: 2024-04-17T19:18:44.590988453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&PodSandboxMetadata{Name:etcd-multinode-990943,Uid:6fef634132628159fb78ee540ccabb2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381525054598716,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.106:2379,kubernetes.io/config.hash: 6fef634132628159fb78ee540ccabb2d,kubernetes.io/config.seen: 2
024-04-17T19:18:44.590986423Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-990943,Uid:29306408692bf92f3a30b7b7a05afb2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381525044915092,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29306408692bf92f3a30b7b7a05afb2c,kubernetes.io/config.seen: 2024-04-17T19:18:44.590982960Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-990943,Uid:b2d754d00c72a58fa0595b1a7dca2e8d,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713381525038390014,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2d754d00c72a58fa0595b1a7dca2e8d,kubernetes.io/config.seen: 2024-04-17T19:18:44.590989391Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e9ba0ab4-3847-4b75-bf4e-80d30ef91a21 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.546717617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2632f2d-549a-4cb1-a86b-209623a70ddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.546798001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2632f2d-549a-4cb1-a86b-209623a70ddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.547388621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2632f2d-549a-4cb1-a86b-209623a70ddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.551685203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f60522e7-c292-46c6-a422-ac855ba4a7f5 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.551762857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f60522e7-c292-46c6-a422-ac855ba4a7f5 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.553077282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5237a816-c424-42a8-a54c-3dd11101dec6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.553697362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382122553675426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5237a816-c424-42a8-a54c-3dd11101dec6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.554261742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94d46f1e-d15f-4d69-85c3-e7e197326881 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.554338825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94d46f1e-d15f-4d69-85c3-e7e197326881 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.554773612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94d46f1e-d15f-4d69-85c3-e7e197326881 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.600072616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dc29cdc-b84d-4f4a-9290-74f21f1eee9e name=/runtime.v1.RuntimeService/Version
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.600172653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dc29cdc-b84d-4f4a-9290-74f21f1eee9e name=/runtime.v1.RuntimeService/Version
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.601381941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bc76239-aeea-47a1-95fa-4c03e7c4c88f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.601986487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382122601960927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133280,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bc76239-aeea-47a1-95fa-4c03e7c4c88f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.602884033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b9f817c-ab79-4d8c-a7e4-e4a9f2f314e8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.602963125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b9f817c-ab79-4d8c-a7e4-e4a9f2f314e8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:28:42 multinode-990943 crio[2847]: time="2024-04-17 19:28:42.606892365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1511f418ee73e69e338f53a10a6521122916dcf1864bbf7a3a23d1be0ed80bee,PodSandboxId:6cf240b67b42bde5a32988fea746cd2538dc1013ddd72a3fdbde091e39839943,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713381931707081075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713381904818997968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659,PodSandboxId:51acde14e0835e72028bd7e9d1cdb7b792b62eff15056c8f6ea0e690512df698,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713381898509669823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686-4883
f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9955b0de2d709d0ef72c1bd2d1e296d17679e737b8ff2965ef1cc8a1a5befd10,PodSandboxId:d455bc6c91767c6f4332076d1ecec7d0f32f2ada9f4b155a1df01516bdaa3135,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713381898505807928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},A
nnotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b,PodSandboxId:d12a0d734c8908201dc9cbc8741681073981acf2c4ed4ad3f9d758b1b233aece,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713381898258174212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121,PodSandboxId:747f3cfb1c4fec02933a9d8447478aed01e7c208faf1af9cfbd00a9a265acdb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713381898162605815,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string
]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815,PodSandboxId:5b45a72d7f3f4ffea462792f78309b648a2859b94cbb68eb668e3d5c7c19e6d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713381898124535257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da,PodSandboxId:7ee0d50e2e7f5fe7cf0087239db80787b79751168796f4a1f3d1fbebd079bbbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713381898051671945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: b2ef4863,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed,PodSandboxId:ed2174ba39d7d4cd6cf511b95b61059c37cc9be6b36690104cf66a25a21e86b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713381897956126059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash: a86a9639,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2,PodSandboxId:0849988a36abbd0bd0259a475d0ea6ba8b0b70d5858c13846ecb84a7e8f0ce04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713381891771321915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dt7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8349fcd-3024-4211-a9c3-4547c8f67778,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2ade23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e035a8242698175439f4005caf227626050ece5da03e74e5f0cf704718acfc6,PodSandboxId:c845a7834fd6659cd3594ca8ada577948e3d4c4340530b3df252f269109e1e28,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713381590127239205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-th5ps,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f81066d9-6c6e-44ca-9c5c-3acfaa971eca,},Annotations:map[string]string{io.kubernetes.container.hash: fb011ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e6af8d02377934fc9bb1f2ce0002b80446c16c6196a480ee93493b1d8418af,PodSandboxId:ac6b510e1043386cb8c2c617dcc693db14d4e2994207f864eb57dcd774147975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713381546541972420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0788ef97-1d5f-4ec6-9194-2fc80bba71a0,},Annotations:map[string]string{io.kubernetes.container.hash: ab0172c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60,PodSandboxId:3af55163936ed4a1a190d5890798fd80ed132bcd77c90842e093f7f6cc9b9c75,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713381545064443608,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qk7wm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ebc8028e-ef63-42b6-aeaf-fa45a37945a4,},Annotations:map[string]string{io.kubernetes.container.hash: 5835ada6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed,PodSandboxId:0374c589fb2a9c1ba98d939219a0919a103421657020d22cf93aecd23877cc27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713381544869611563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ppn8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431b5f9e-8334-49b7-a686
-4883f93e09cd,},Annotations:map[string]string{io.kubernetes.container.hash: 24f142a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df,PodSandboxId:b902f27763cb1163cadd7730784877595b1757ec5138af2d94a34e99a58a0db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713381525280797234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29306408692bf92f3a30b7b7a05afb2c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42,PodSandboxId:36e63773a52fe223a77aaa21179f278a63f5ac95a60fce819d383452a32033fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713381525336022775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d1c1380b7f94bdea842f02cd62168a,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b2ef4863,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674,PodSandboxId:12a91945c5f1d9018a884d01fe525bb2484618ddbb56a3202a622bdf17d62631,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713381525286804256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fef634132628159fb78ee540ccabb2d,},Annotations:map[string]string{io.kubernetes.container.hash:
a86a9639,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112,PodSandboxId:e3141b7fabb2851ca7803853475c35ee2bbac9ad393490e334596fbbe95bdd0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713381525177803857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-990943,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d754d00c72a58fa0595b1a7dca2e8d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b9f817c-ab79-4d8c-a7e4-e4a9f2f314e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1511f418ee73e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6cf240b67b42b       busybox-fc5497c4f-th5ps
	d3a9ac00162da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   0849988a36abb       coredns-7db6d8ff4d-dt7cs
	1ece9e23e1f0f       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      3 minutes ago       Running             kube-proxy                1                   51acde14e0835       kube-proxy-ppn8d
	9955b0de2d709       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   d455bc6c91767       storage-provisioner
	15ddd574dd705       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   d12a0d734c890       kindnet-qk7wm
	5bfe669ed90c6       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      3 minutes ago       Running             kube-controller-manager   1                   747f3cfb1c4fe       kube-controller-manager-multinode-990943
	f3af9849b5470       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      3 minutes ago       Running             kube-scheduler            1                   5b45a72d7f3f4       kube-scheduler-multinode-990943
	1c03974da7166       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      3 minutes ago       Running             kube-apiserver            1                   7ee0d50e2e7f5       kube-apiserver-multinode-990943
	08ce914dce8cc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   ed2174ba39d7d       etcd-multinode-990943
	acf951a1e9803       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Exited              coredns                   1                   0849988a36abb       coredns-7db6d8ff4d-dt7cs
	9e035a8242698       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   c845a7834fd66       busybox-fc5497c4f-th5ps
	e9e6af8d02377       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   ac6b510e10433       storage-provisioner
	783022d51342e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   3af55163936ed       kindnet-qk7wm
	bc33551c73203       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      9 minutes ago       Exited              kube-proxy                0                   0374c589fb2a9       kube-proxy-ppn8d
	3a4c87e81ce09       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      9 minutes ago       Exited              kube-apiserver            0                   36e63773a52fe       kube-apiserver-multinode-990943
	1b728d1ed6b5f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   12a91945c5f1d       etcd-multinode-990943
	d48a0f8541a47       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      9 minutes ago       Exited              kube-scheduler            0                   b902f27763cb1       kube-scheduler-multinode-990943
	e9c1a47ae3971       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      9 minutes ago       Exited              kube-controller-manager   0                   e3141b7fabb28       kube-controller-manager-multinode-990943
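	
	A container listing in this format comes from CRI-O's CRI API. A minimal way to reproduce it by hand, assuming the profile name from this run and crictl available inside the node:
	
	  # List all containers on the primary node, including exited ones
	  # (matches the CREATED/STATE/NAME/ATTEMPT columns above).
	  minikube ssh -p multinode-990943 -- sudo crictl ps -a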
	
	
	==> coredns [acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39958 - 49103 "HINFO IN 1039202964132204482.5829226989441497131. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021711051s
	
	
	==> coredns [d3a9ac00162da7df34f6d42aae696cd4babdd1c3fe27135ee5a7ad20eb16fa3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33875 - 22971 "HINFO IN 6065393507524619077.3259373049489553806. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020849821s
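	
	The two coredns blocks above are the exited attempt and the currently running attempt of the same pod. Logs like these can usually be pulled with kubectl; a minimal sketch, assuming the pod name shown above:
	
	  # Current container logs for the coredns pod
	  kubectl -n kube-system logs coredns-7db6d8ff4d-dt7cs
	  # Logs from the previously exited container of the same pod
	  kubectl -n kube-system logs coredns-7db6d8ff4d-dt7cs --previous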
	
	
	==> describe nodes <==
	Name:               multinode-990943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-990943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=multinode-990943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_18_51_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:18:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-990943
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:28:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:25:13 +0000   Wed, 17 Apr 2024 19:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    multinode-990943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e71579ac79364f6aa68f763fac6105cf
	  System UUID:                e71579ac-7936-4f6a-a68f-763fac6105cf
	  Boot ID:                    872cfc14-74a2-4216-ab40-42125acfa7ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-th5ps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 coredns-7db6d8ff4d-dt7cs                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m38s
	  kube-system                 etcd-multinode-990943                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m52s
	  kube-system                 kindnet-qk7wm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m38s
	  kube-system                 kube-apiserver-multinode-990943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-controller-manager-multinode-990943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-proxy-ppn8d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-scheduler-multinode-990943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x8 over 9m58s)  kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x8 over 9m58s)  kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x7 over 9m58s)  kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m52s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m52s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s                  kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m52s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m39s                  node-controller  Node multinode-990943 event: Registered Node multinode-990943 in Controller
	  Normal  NodeReady                9m36s                  kubelet          Node multinode-990943 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet          Node multinode-990943 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m39s                  kubelet          Node multinode-990943 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m39s                  kubelet          Node multinode-990943 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-990943 event: Registered Node multinode-990943 in Controller
	  Normal  NodeReady                3m29s                  kubelet          Node multinode-990943 status is now: NodeReady
	
	
	Name:               multinode-990943-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-990943-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=multinode-990943
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_17T19_25_40_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:25:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-990943-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:26:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:27:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:27:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:27:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Apr 2024 19:26:10 +0000   Wed, 17 Apr 2024 19:27:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-990943-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81f580ad8d944061925e7b538916ee5d
	  System UUID:                81f580ad-8d94-4061-925e-7b538916ee5d
	  Boot ID:                    cb8ed231-3459-483f-bf78-f2d12e140631
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qnckz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kindnet-7c6bt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m6s
	  kube-system                 kube-proxy-5v4n8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m1s                 kube-proxy       
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m6s (x2 over 9m6s)  kubelet          Node multinode-990943-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s (x2 over 9m6s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m6s (x2 over 9m6s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m57s                kubelet          Node multinode-990943-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node multinode-990943-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node multinode-990943-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node multinode-990943-m02 event: Registered Node multinode-990943-m02 in Controller
	  Normal  NodeReady                2m54s                kubelet          Node multinode-990943-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                  node-controller  Node multinode-990943-m02 status is now: NodeNotReady
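	
	The node descriptions above are standard kubectl describe output; note the unreachable taints and Unknown conditions on multinode-990943-m02 after its kubelet stopped posting status. A minimal sketch of how such a dump is gathered, assuming the kubeconfig for this profile is active:
	
	  # Describe every node (per-node output equivalent to the sections above)
	  kubectl describe nodes
	  # Or just the worker that went NotReady
	  kubectl describe node multinode-990943-m02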
	
	
	==> dmesg <==
	[  +0.068042] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061417] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.198536] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.123428] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.285686] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.332376] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.056575] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.023774] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.993595] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.061005] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.075308] kauditd_printk_skb: 15 callbacks suppressed
	[Apr17 19:19] systemd-fstab-generator[1465]: Ignoring "noauto" option for root device
	[  +0.116523] kauditd_printk_skb: 21 callbacks suppressed
	[ +44.348855] kauditd_printk_skb: 84 callbacks suppressed
	[Apr17 19:24] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.149830] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.167785] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.150593] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.318781] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +1.974855] systemd-fstab-generator[2932]: Ignoring "noauto" option for root device
	[  +6.687137] kauditd_printk_skb: 132 callbacks suppressed
	[Apr17 19:25] systemd-fstab-generator[3816]: Ignoring "noauto" option for root device
	[  +0.091770] kauditd_printk_skb: 62 callbacks suppressed
	[  +2.947551] systemd-fstab-generator[3937]: Ignoring "noauto" option for root device
	[  +7.688524] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [08ce914dce8cc7288eadf64ae4a401fa7461be3a2975998a2cd051824f1e1bed] <==
	{"level":"info","ts":"2024-04-17T19:24:58.341514Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:24:58.341597Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:24:58.341924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc switched to configuration voters=(1386996336873150412)"}
	{"level":"info","ts":"2024-04-17T19:24:58.341978Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","added-peer-id":"133f99d1dc1797cc","added-peer-peer-urls":["https://192.168.39.106:2380"]}
	{"level":"info","ts":"2024-04-17T19:24:58.342105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:24:58.342129Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:24:58.349921Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:24:58.350365Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-17T19:24:58.350387Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-17T19:24:58.350982Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:24:58.351026Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:24:59.620539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-04-17T19:24:59.620704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.620752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-04-17T19:24:59.62485Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:multinode-990943 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:24:59.625019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:24:59.628536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:24:59.629434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-04-17T19:24:59.631893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:24:59.631944Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:24:59.637817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [1b728d1ed6b5fa3627b2580a3f3315c077f169e012804a4ce2cce1aac23a4674] <==
	{"level":"info","ts":"2024-04-17T19:20:21.700388Z","caller":"traceutil/trace.go:171","msg":"trace[1059734923] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:634; }","duration":"132.414216ms","start":"2024-04-17T19:20:21.567949Z","end":"2024-04-17T19:20:21.700363Z","steps":["trace[1059734923] 'read index received'  (duration: 90.796845ms)","trace[1059734923] 'applied index is now lower than readState.Index'  (duration: 41.614603ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:20:21.700564Z","caller":"traceutil/trace.go:171","msg":"trace[1274647754] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"163.720009ms","start":"2024-04-17T19:20:21.536834Z","end":"2024-04-17T19:20:21.700554Z","steps":["trace[1274647754] 'process raft request'  (duration: 163.468653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:21.700966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.937421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:1935"}
	{"level":"info","ts":"2024-04-17T19:20:21.701067Z","caller":"traceutil/trace.go:171","msg":"trace[640670488] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:604; }","duration":"133.141383ms","start":"2024-04-17T19:20:21.567904Z","end":"2024-04-17T19:20:21.701045Z","steps":["trace[640670488] 'agreement among raft nodes before linearized reading'  (duration: 132.876675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.679691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.516444ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938274745669941087 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" mod_revision:591 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" value_size:507 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:20:27.679796Z","caller":"traceutil/trace.go:171","msg":"trace[1687982351] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"381.735594ms","start":"2024-04-17T19:20:27.298049Z","end":"2024-04-17T19:20:27.679785Z","steps":["trace[1687982351] 'read index received'  (duration: 156.957766ms)","trace[1687982351] 'applied index is now lower than readState.Index'  (duration: 224.7769ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:20:27.679868Z","caller":"traceutil/trace.go:171","msg":"trace[988375117] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"436.74226ms","start":"2024-04-17T19:20:27.243118Z","end":"2024-04-17T19:20:27.67986Z","steps":["trace[988375117] 'process raft request'  (duration: 212.150393ms)","trace[988375117] 'compare'  (duration: 223.09051ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:20:27.679957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:20:27.243101Z","time spent":"436.813845ms","remote":"127.0.0.1:50356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":568,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" mod_revision:591 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" value_size:507 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-990943-m02\" > >"}
	{"level":"warn","ts":"2024-04-17T19:20:27.680189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.133137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:2969"}
	{"level":"info","ts":"2024-04-17T19:20:27.680233Z","caller":"traceutil/trace.go:171","msg":"trace[92731692] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:643; }","duration":"382.200643ms","start":"2024-04-17T19:20:27.298025Z","end":"2024-04-17T19:20:27.680226Z","steps":["trace[92731692] 'agreement among raft nodes before linearized reading'  (duration: 382.096064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.680262Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:20:27.298013Z","time spent":"382.244262ms","remote":"127.0.0.1:50268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2991,"request content":"key:\"/registry/minions/multinode-990943-m03\" "}
	{"level":"warn","ts":"2024-04-17T19:20:27.680389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.946961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-17T19:20:27.680722Z","caller":"traceutil/trace.go:171","msg":"trace[1409258157] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:643; }","duration":"228.028022ms","start":"2024-04-17T19:20:27.452416Z","end":"2024-04-17T19:20:27.680444Z","steps":["trace[1409258157] 'agreement among raft nodes before linearized reading'  (duration: 227.92815ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:20:27.980196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.271971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-990943-m03\" ","response":"range_response_count:1 size:2969"}
	{"level":"info","ts":"2024-04-17T19:20:27.980315Z","caller":"traceutil/trace.go:171","msg":"trace[1093728173] range","detail":"{range_begin:/registry/minions/multinode-990943-m03; range_end:; response_count:1; response_revision:643; }","duration":"181.420691ms","start":"2024-04-17T19:20:27.798875Z","end":"2024-04-17T19:20:27.980296Z","steps":["trace[1093728173] 'range keys from in-memory index tree'  (duration: 181.098235ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:23:16.982885Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-17T19:23:16.983018Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-990943","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	{"level":"warn","ts":"2024-04-17T19:23:16.983219Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:16.983355Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:17.053093Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T19:23:17.053181Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-17T19:23:17.053275Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"133f99d1dc1797cc","current-leader-member-id":"133f99d1dc1797cc"}
	{"level":"info","ts":"2024-04-17T19:23:17.05572Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:23:17.055953Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-04-17T19:23:17.055998Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-990943","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	
	
	==> kernel <==
	 19:28:43 up 10 min,  0 users,  load average: 0.15, 0.16, 0.10
	Linux multinode-990943 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [15ddd574dd7052bca24ad5c137912cf9ba1ac34605c041676c87c8de356b1c9b] <==
	I0417 19:27:41.623887       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:27:51.637332       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:27:51.637375       1 main.go:227] handling current node
	I0417 19:27:51.637413       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:27:51.637420       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:28:01.644796       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:28:01.644839       1 main.go:227] handling current node
	I0417 19:28:01.644856       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:28:01.644862       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:28:11.653780       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:28:11.653824       1 main.go:227] handling current node
	I0417 19:28:11.653838       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:28:11.653848       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:28:21.667330       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:28:21.667560       1 main.go:227] handling current node
	I0417 19:28:21.667595       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:28:21.667614       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:28:31.680318       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:28:31.680533       1 main.go:227] handling current node
	I0417 19:28:31.680583       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:28:31.680604       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:28:41.689095       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:28:41.689382       1 main.go:227] handling current node
	I0417 19:28:41.689431       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:28:41.689506       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [783022d51342eb0c01428a003d8c923edb7c50b94ed6aed76125c59614a1dc60] <==
	I0417 19:22:36.091358       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:22:46.102555       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:22:46.102599       1 main.go:227] handling current node
	I0417 19:22:46.102611       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:22:46.102618       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:22:46.102721       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:22:46.102749       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:22:56.114973       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:22:56.115052       1 main.go:227] handling current node
	I0417 19:22:56.115064       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:22:56.115070       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:22:56.115179       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:22:56.115206       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:23:06.136205       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:23:06.136286       1 main.go:227] handling current node
	I0417 19:23:06.136301       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:23:06.136309       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:23:06.136678       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:23:06.136719       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	I0417 19:23:16.155340       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0417 19:23:16.155395       1 main.go:227] handling current node
	I0417 19:23:16.155411       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0417 19:23:16.155419       1 main.go:250] Node multinode-990943-m02 has CIDR [10.244.1.0/24] 
	I0417 19:23:16.156646       1 main.go:223] Handling node with IPs: map[192.168.39.67:{}]
	I0417 19:23:16.156690       1 main.go:250] Node multinode-990943-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1c03974da7166a2c371175c0a633891e6bfbf41aac4e6ecd0a9a41d62b0282da] <==
	I0417 19:25:01.389606       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:25:01.465185       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:25:01.465287       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0417 19:25:01.465384       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:25:01.465771       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:25:01.466234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:25:01.476821       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:25:01.476970       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:25:01.477000       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:25:01.477007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:25:01.477011       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:25:01.467542       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 19:25:01.478866       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:25:01.479633       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:25:01.479675       1 policy_source.go:224] refreshing policies
	I0417 19:25:01.493554       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:25:01.515346       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 19:25:02.362930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0417 19:25:04.182361       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 19:25:04.303223       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 19:25:04.316821       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 19:25:04.395018       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:25:04.407222       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 19:25:13.917585       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:25:13.935778       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3a4c87e81ce0921f4c5da69aae21f79778767f0f08f0f2c619c0f16d1cb63f42] <==
	E0417 19:23:17.002064       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.003310       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005375       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005538       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005611       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.005666       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.006351       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.000191       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009168       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009668       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009747       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009802       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.009921       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.010128       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.010219       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.011654       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012158       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012568       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012748       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc0043b9f60)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.012990       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013105       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013142       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0417 19:23:17.013903       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0417 19:23:17.014931       1 controller.go:157] Shutting down quota evaluator
	I0417 19:23:17.015045       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [5bfe669ed90c672854bd59d10375e2ff37fe256ae5ce75bb22b593b58e995121] <==
	I0417 19:25:39.464677       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m02\" does not exist"
	I0417 19:25:39.482048       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m02" podCIDRs=["10.244.1.0/24"]
	I0417 19:25:41.355978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.668µs"
	I0417 19:25:41.408653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.34µs"
	I0417 19:25:41.424735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.522µs"
	I0417 19:25:41.440936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.082µs"
	I0417 19:25:41.449807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.733µs"
	I0417 19:25:41.451627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.139µs"
	I0417 19:25:48.167412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:25:48.187670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.556µs"
	I0417 19:25:48.209114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.584µs"
	I0417 19:25:50.553262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.555615ms"
	I0417 19:25:50.553523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.615µs"
	I0417 19:26:06.382889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:26:07.377843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:26:07.378950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:26:07.409819       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.2.0/24"]
	I0417 19:26:15.664997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:26:21.368428       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:27:04.116129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.460121ms"
	I0417 19:27:04.116233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.15µs"
	I0417 19:27:13.992185       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-q8gbt"
	I0417 19:27:14.019581       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-q8gbt"
	I0417 19:27:14.020629       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-58bgz"
	I0417 19:27:14.048062       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-58bgz"
	
	
	==> kube-controller-manager [e9c1a47ae3971d112b5f3a47b9d525fdc7d6f54d07bf0f3c76f0d2ed9c2f6112] <==
	I0417 19:19:36.715915       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m02\" does not exist"
	I0417 19:19:36.729103       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m02" podCIDRs=["10.244.1.0/24"]
	I0417 19:19:38.198075       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-990943-m02"
	I0417 19:19:45.835979       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:19:48.030182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.943376ms"
	I0417 19:19:48.050904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.585021ms"
	I0417 19:19:48.051257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.484µs"
	I0417 19:19:48.056610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.168µs"
	I0417 19:19:50.587139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.534434ms"
	I0417 19:19:50.587611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.241µs"
	I0417 19:19:50.848210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.315899ms"
	I0417 19:19:50.848320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.585µs"
	I0417 19:20:21.703827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:20:21.704209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:20:21.715998       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.2.0/24"]
	I0417 19:20:23.219694       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-990943-m03"
	I0417 19:20:30.446215       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:00.640619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:01.783651       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-990943-m03\" does not exist"
	I0417 19:21:01.783846       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:01.794714       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-990943-m03" podCIDRs=["10.244.3.0/24"]
	I0417 19:21:10.402512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m02"
	I0417 19:21:48.274066       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-990943-m03"
	I0417 19:21:48.341627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.956152ms"
	I0417 19:21:48.341837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.139µs"
	
	
	==> kube-proxy [1ece9e23e1f0f891658c2bbbc53aa61c2071c50ebdaeee0ddf9d31f2da9d9659] <==
	I0417 19:25:00.156073       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:25:01.481345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I0417 19:25:01.539865       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:25:01.539989       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:25:01.540008       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:25:01.542804       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:25:01.543105       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:25:01.543320       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:25:01.544834       1 config.go:192] "Starting service config controller"
	I0417 19:25:01.544884       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:25:01.544921       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:25:01.544945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:25:01.545566       1 config.go:319] "Starting node config controller"
	I0417 19:25:01.547592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:25:01.645060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:25:01.645132       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:25:01.647968       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [bc33551c73203adda563e6a2fc726fcc8f0b70a1d5e143e231c35359fc0fc2ed] <==
	I0417 19:19:05.093788       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:19:05.102114       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I0417 19:19:05.146066       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:19:05.146093       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:19:05.146107       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:19:05.148766       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:19:05.149025       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:19:05.149240       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:19:05.150298       1 config.go:192] "Starting service config controller"
	I0417 19:19:05.150342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:19:05.150382       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:19:05.150398       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:19:05.150991       1 config.go:319] "Starting node config controller"
	I0417 19:19:05.151029       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:19:05.251235       1 shared_informer.go:320] Caches are synced for node config
	I0417 19:19:05.251326       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:19:05.251336       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d48a0f8541a47f9407b67db7560494310f8fc331fd7ece875ec39f5f5845e9df] <==
	E0417 19:18:47.994937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0417 19:18:47.995016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:47.995104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:18:48.805428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0417 19:18:48.805505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0417 19:18:49.028080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:49.028234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:18:49.049163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:18:49.049596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:18:49.055541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:18:49.055642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:18:49.058546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 19:18:49.059111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 19:18:49.061355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:18:49.061426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0417 19:18:49.104663       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0417 19:18:49.104771       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:18:49.139342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0417 19:18:49.139443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0417 19:18:49.197715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:18:49.197800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:18:49.286419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0417 19:18:49.286523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0417 19:18:51.684039       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0417 19:23:16.992110       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f3af9849b5470fed8d5f20d35475074bf104e50ecdc430fe81d4f71943ade815] <==
	I0417 19:24:59.408352       1 serving.go:380] Generated self-signed cert in-memory
	W0417 19:25:01.384927       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0417 19:25:01.384970       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 19:25:01.384982       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0417 19:25:01.384987       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0417 19:25:01.425007       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0417 19:25:01.425048       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:25:01.428946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0417 19:25:01.429078       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0417 19:25:01.429114       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:25:01.429130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:25:01.529743       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572055    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/431b5f9e-8334-49b7-a686-4883f93e09cd-lib-modules\") pod \"kube-proxy-ppn8d\" (UID: \"431b5f9e-8334-49b7-a686-4883f93e09cd\") " pod="kube-system/kube-proxy-ppn8d"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572146    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/431b5f9e-8334-49b7-a686-4883f93e09cd-xtables-lock\") pod \"kube-proxy-ppn8d\" (UID: \"431b5f9e-8334-49b7-a686-4883f93e09cd\") " pod="kube-system/kube-proxy-ppn8d"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572167    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-xtables-lock\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572181    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-lib-modules\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572206    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0788ef97-1d5f-4ec6-9194-2fc80bba71a0-tmp\") pod \"storage-provisioner\" (UID: \"0788ef97-1d5f-4ec6-9194-2fc80bba71a0\") " pod="kube-system/storage-provisioner"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.572241    3823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ebc8028e-ef63-42b6-aeaf-fa45a37945a4-cni-cfg\") pod \"kindnet-qk7wm\" (UID: \"ebc8028e-ef63-42b6-aeaf-fa45a37945a4\") " pod="kube-system/kindnet-qk7wm"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: E0417 19:25:04.785706    3823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-990943\" already exists" pod="kube-system/kube-apiserver-multinode-990943"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: E0417 19:25:04.793218    3823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-990943\" already exists" pod="kube-system/kube-controller-manager-multinode-990943"
	Apr 17 19:25:04 multinode-990943 kubelet[3823]: I0417 19:25:04.796067    3823 scope.go:117] "RemoveContainer" containerID="acf951a1e980356f39898e558a17924610f81846b74afe2bb3e8500366e88de2"
	Apr 17 19:25:14 multinode-990943 kubelet[3823]: I0417 19:25:14.516183    3823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 17 19:26:03 multinode-990943 kubelet[3823]: E0417 19:26:03.614855    3823 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:26:03 multinode-990943 kubelet[3823]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 19:27:03 multinode-990943 kubelet[3823]: E0417 19:27:03.613664    3823 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:27:03 multinode-990943 kubelet[3823]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:27:03 multinode-990943 kubelet[3823]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:27:03 multinode-990943 kubelet[3823]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:27:03 multinode-990943 kubelet[3823]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 19:28:03 multinode-990943 kubelet[3823]: E0417 19:28:03.614400    3823 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 19:28:03 multinode-990943 kubelet[3823]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 19:28:03 multinode-990943 kubelet[3823]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 19:28:03 multinode-990943 kubelet[3823]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 19:28:03 multinode-990943 kubelet[3823]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0417 19:28:42.176717  113826 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18665-75973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
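Note: the "bufio.Scanner: token too long" failure in the stderr above is Go's default 64 KiB per-token scanner limit tripping on a very long line in lastStart.txt. As a minimal, hypothetical sketch (not minikube's actual logs.go code), reading such a file with an enlarged scanner buffer avoids that failure mode:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; the report was reading .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// bufio.Scanner caps tokens at bufio.MaxScanTokenSize (64 KiB) by default;
		// a single very long log line then fails with "bufio.Scanner: token too long".
		// Growing the buffer (here to 1 MiB) lifts that limit.
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}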
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-990943 -n multinode-990943
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-990943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.59s)

TestPreload (168.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0417 19:33:19.319061   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.299559707s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590764 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-590764 image pull gcr.io/k8s-minikube/busybox: (1.701938017s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-590764
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-590764: (7.320275389s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.94834641s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590764 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
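For context, the assertion that failed above reduces to a substring check over `minikube image list` output for the profile: the busybox image pulled before the stop was expected to survive the stop/start cycle. A minimal sketch of that kind of check (hypothetical helper, not the actual preload_test.go code), using the binary path and profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent runs `minikube image list` for a profile and reports whether
	// the given image appears in the output. Sketch only; the real test uses
	// its own helpers and error handling.
	func imagePresent(minikubeBin, profile, image string) (bool, error) {
		out, err := exec.Command(minikubeBin, "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("image list failed: %v\n%s", err, out)
		}
		return strings.Contains(string(out), image), nil
	}

	func main() {
		ok, err := imagePresent("out/minikube-linux-amd64", "test-preload-590764", "gcr.io/k8s-minikube/busybox")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		if !ok {
			fmt.Println("gcr.io/k8s-minikube/busybox missing from image list output")
		}
	}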
panic.go:626: *** TestPreload FAILED at 2024-04-17 19:35:12.000675396 +0000 UTC m=+5776.722663591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-590764 -n test-preload-590764
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-590764 logs -n 25: (1.252367902s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943 sudo cat                                       | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943.txt                          |                      |         |                |                     |                     |
	| cp      | multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt                       | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m02:/home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt |                      |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n                                                                 | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | multinode-990943-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-990943 ssh -n multinode-990943-m02 sudo cat                                   | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	|         | /home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt                      |                      |         |                |                     |                     |
	| node    | multinode-990943 node stop m03                                                          | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:20 UTC |
	| node    | multinode-990943 node start                                                             | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:20 UTC | 17 Apr 24 19:21 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| stop    | -p multinode-990943                                                                     | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:21 UTC |                     |
	| start   | -p multinode-990943                                                                     | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:23 UTC | 17 Apr 24 19:26 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC |                     |
	| node    | multinode-990943 node delete                                                            | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC | 17 Apr 24 19:26 UTC |
	|         | m03                                                                                     |                      |         |                |                     |                     |
	| stop    | multinode-990943 stop                                                                   | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:26 UTC |                     |
	| start   | -p multinode-990943                                                                     | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:28 UTC | 17 Apr 24 19:31 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | list -p multinode-990943                                                                | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:31 UTC |                     |
	| start   | -p multinode-990943-m02                                                                 | multinode-990943-m02 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:31 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| start   | -p multinode-990943-m03                                                                 | multinode-990943-m03 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:31 UTC | 17 Apr 24 19:32 UTC |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | add -p multinode-990943                                                                 | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:32 UTC |                     |
	| delete  | -p multinode-990943-m03                                                                 | multinode-990943-m03 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:32 UTC | 17 Apr 24 19:32 UTC |
	| delete  | -p multinode-990943                                                                     | multinode-990943     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:32 UTC | 17 Apr 24 19:32 UTC |
	| start   | -p test-preload-590764                                                                  | test-preload-590764  | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:32 UTC | 17 Apr 24 19:34 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |                |                     |                     |
	| image   | test-preload-590764 image pull                                                          | test-preload-590764  | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:34 UTC | 17 Apr 24 19:34 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |                |                     |                     |
	| stop    | -p test-preload-590764                                                                  | test-preload-590764  | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:34 UTC | 17 Apr 24 19:34 UTC |
	| start   | -p test-preload-590764                                                                  | test-preload-590764  | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:34 UTC | 17 Apr 24 19:35 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |                |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| image   | test-preload-590764 image list                                                          | test-preload-590764  | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:35 UTC | 17 Apr 24 19:35 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
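	For reference, the TestPreload sequence captured in the table above amounts to the following commands (a sketch reconstructed from the table's flag columns; it assumes the out/minikube-linux-amd64 binary from this run and a working kvm2 driver on the host, and the exact flag ordering is inferred, not copied from the harness):
	
	  out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	  out/minikube-linux-amd64 -p test-preload-590764 image pull gcr.io/k8s-minikube/busybox
	  out/minikube-linux-amd64 stop -p test-preload-590764
	  out/minikube-linux-amd64 start -p test-preload-590764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	  out/minikube-linux-amd64 -p test-preload-590764 image list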
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:34:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:34:13.872733  115894 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:34:13.872842  115894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:34:13.872850  115894 out.go:304] Setting ErrFile to fd 2...
	I0417 19:34:13.872855  115894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:34:13.873040  115894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:34:13.873576  115894 out.go:298] Setting JSON to false
	I0417 19:34:13.874466  115894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11802,"bootTime":1713370652,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:34:13.874528  115894 start.go:139] virtualization: kvm guest
	I0417 19:34:13.876968  115894 out.go:177] * [test-preload-590764] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:34:13.878585  115894 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:34:13.878632  115894 notify.go:220] Checking for updates...
	I0417 19:34:13.880065  115894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:34:13.881815  115894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:34:13.883387  115894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:34:13.884894  115894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:34:13.886316  115894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:34:13.888181  115894 config.go:182] Loaded profile config "test-preload-590764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0417 19:34:13.888629  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:34:13.888678  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:34:13.903357  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0417 19:34:13.903768  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:34:13.904297  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:34:13.904330  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:34:13.904685  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:34:13.904902  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:13.906968  115894 out.go:177] * Kubernetes 1.30.0-rc.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0-rc.2
	I0417 19:34:13.908431  115894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:34:13.908748  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:34:13.908815  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:34:13.923747  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0417 19:34:13.924123  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:34:13.924612  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:34:13.924634  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:34:13.924989  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:34:13.925173  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:13.959684  115894 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 19:34:13.961357  115894 start.go:297] selected driver: kvm2
	I0417 19:34:13.961371  115894 start.go:901] validating driver "kvm2" against &{Name:test-preload-590764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-590764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:34:13.961487  115894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:34:13.962216  115894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:34:13.962323  115894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:34:13.976587  115894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:34:13.976951  115894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:34:13.977033  115894 cni.go:84] Creating CNI manager for ""
	I0417 19:34:13.977055  115894 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:34:13.977133  115894 start.go:340] cluster config:
	{Name:test-preload-590764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-590764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:34:13.977251  115894 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:34:13.979212  115894 out.go:177] * Starting "test-preload-590764" primary control-plane node in "test-preload-590764" cluster
	I0417 19:34:13.980857  115894 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0417 19:34:13.999841  115894 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0417 19:34:13.999872  115894 cache.go:56] Caching tarball of preloaded images
	I0417 19:34:14.000008  115894 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0417 19:34:14.001868  115894 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0417 19:34:14.003409  115894 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0417 19:34:14.028806  115894 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0417 19:34:18.459438  115894 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0417 19:34:18.459540  115894 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0417 19:34:19.311721  115894 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0417 19:34:19.311849  115894 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/config.json ...
	I0417 19:34:19.312078  115894 start.go:360] acquireMachinesLock for test-preload-590764: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:34:19.312149  115894 start.go:364] duration metric: took 47.385µs to acquireMachinesLock for "test-preload-590764"
	I0417 19:34:19.312170  115894 start.go:96] Skipping create...Using existing machine configuration
	I0417 19:34:19.312197  115894 fix.go:54] fixHost starting: 
	I0417 19:34:19.312510  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:34:19.312549  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:34:19.326991  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0417 19:34:19.327416  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:34:19.327985  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:34:19.328012  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:34:19.328377  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:34:19.328621  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:19.328833  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetState
	I0417 19:34:19.330364  115894 fix.go:112] recreateIfNeeded on test-preload-590764: state=Stopped err=<nil>
	I0417 19:34:19.330387  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	W0417 19:34:19.330551  115894 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 19:34:19.332749  115894 out.go:177] * Restarting existing kvm2 VM for "test-preload-590764" ...
	I0417 19:34:19.334242  115894 main.go:141] libmachine: (test-preload-590764) Calling .Start
	I0417 19:34:19.334407  115894 main.go:141] libmachine: (test-preload-590764) Ensuring networks are active...
	I0417 19:34:19.335105  115894 main.go:141] libmachine: (test-preload-590764) Ensuring network default is active
	I0417 19:34:19.335450  115894 main.go:141] libmachine: (test-preload-590764) Ensuring network mk-test-preload-590764 is active
	I0417 19:34:19.335806  115894 main.go:141] libmachine: (test-preload-590764) Getting domain xml...
	I0417 19:34:19.336466  115894 main.go:141] libmachine: (test-preload-590764) Creating domain...
	I0417 19:34:20.508789  115894 main.go:141] libmachine: (test-preload-590764) Waiting to get IP...
	I0417 19:34:20.509700  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:20.510042  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:20.510112  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:20.510013  115941 retry.go:31] will retry after 252.43397ms: waiting for machine to come up
	I0417 19:34:20.764610  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:20.765103  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:20.765134  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:20.765039  115941 retry.go:31] will retry after 255.885833ms: waiting for machine to come up
	I0417 19:34:21.022611  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:21.023076  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:21.023106  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:21.023019  115941 retry.go:31] will retry after 402.260244ms: waiting for machine to come up
	I0417 19:34:21.426588  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:21.426987  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:21.427017  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:21.426935  115941 retry.go:31] will retry after 494.83855ms: waiting for machine to come up
	I0417 19:34:21.923608  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:21.924092  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:21.924124  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:21.924031  115941 retry.go:31] will retry after 637.123534ms: waiting for machine to come up
	I0417 19:34:22.562776  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:22.563195  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:22.563220  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:22.563153  115941 retry.go:31] will retry after 802.387397ms: waiting for machine to come up
	I0417 19:34:23.367095  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:23.367614  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:23.367646  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:23.367563  115941 retry.go:31] will retry after 1.061744819s: waiting for machine to come up
	I0417 19:34:24.430949  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:24.431414  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:24.431447  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:24.431367  115941 retry.go:31] will retry after 1.073003576s: waiting for machine to come up
	I0417 19:34:25.505591  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:25.506011  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:25.506041  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:25.505967  115941 retry.go:31] will retry after 1.17409915s: waiting for machine to come up
	I0417 19:34:26.682302  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:26.682676  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:26.682706  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:26.682595  115941 retry.go:31] will retry after 2.187412424s: waiting for machine to come up
	I0417 19:34:28.873080  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:28.875308  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:28.875334  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:28.873486  115941 retry.go:31] will retry after 2.113576259s: waiting for machine to come up
	I0417 19:34:30.989030  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:30.989370  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:30.989398  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:30.989340  115941 retry.go:31] will retry after 2.221767079s: waiting for machine to come up
	I0417 19:34:33.213741  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:33.214134  115894 main.go:141] libmachine: (test-preload-590764) DBG | unable to find current IP address of domain test-preload-590764 in network mk-test-preload-590764
	I0417 19:34:33.214156  115894 main.go:141] libmachine: (test-preload-590764) DBG | I0417 19:34:33.214107  115941 retry.go:31] will retry after 3.858963927s: waiting for machine to come up
	I0417 19:34:37.075865  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.076252  115894 main.go:141] libmachine: (test-preload-590764) Found IP for machine: 192.168.39.86
	I0417 19:34:37.076284  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has current primary IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.076293  115894 main.go:141] libmachine: (test-preload-590764) Reserving static IP address...
	I0417 19:34:37.076718  115894 main.go:141] libmachine: (test-preload-590764) Reserved static IP address: 192.168.39.86
	I0417 19:34:37.076745  115894 main.go:141] libmachine: (test-preload-590764) Waiting for SSH to be available...
	I0417 19:34:37.076783  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "test-preload-590764", mac: "52:54:00:f8:1d:b3", ip: "192.168.39.86"} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.076814  115894 main.go:141] libmachine: (test-preload-590764) DBG | skip adding static IP to network mk-test-preload-590764 - found existing host DHCP lease matching {name: "test-preload-590764", mac: "52:54:00:f8:1d:b3", ip: "192.168.39.86"}
	I0417 19:34:37.076837  115894 main.go:141] libmachine: (test-preload-590764) DBG | Getting to WaitForSSH function...
	I0417 19:34:37.078642  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.078950  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.078966  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.079160  115894 main.go:141] libmachine: (test-preload-590764) DBG | Using SSH client type: external
	I0417 19:34:37.079173  115894 main.go:141] libmachine: (test-preload-590764) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa (-rw-------)
	I0417 19:34:37.079252  115894 main.go:141] libmachine: (test-preload-590764) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 19:34:37.079267  115894 main.go:141] libmachine: (test-preload-590764) DBG | About to run SSH command:
	I0417 19:34:37.079303  115894 main.go:141] libmachine: (test-preload-590764) DBG | exit 0
	I0417 19:34:37.204826  115894 main.go:141] libmachine: (test-preload-590764) DBG | SSH cmd err, output: <nil>: 
	I0417 19:34:37.205175  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetConfigRaw
	I0417 19:34:37.205808  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetIP
	I0417 19:34:37.208098  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.208461  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.208492  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.208679  115894 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/config.json ...
	I0417 19:34:37.208902  115894 machine.go:94] provisionDockerMachine start ...
	I0417 19:34:37.208923  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:37.209154  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.211528  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.211890  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.211923  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.212031  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:37.212243  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.212394  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.212486  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:37.212612  115894 main.go:141] libmachine: Using SSH client type: native
	I0417 19:34:37.212829  115894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0417 19:34:37.212842  115894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:34:37.321468  115894 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0417 19:34:37.321491  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetMachineName
	I0417 19:34:37.321761  115894 buildroot.go:166] provisioning hostname "test-preload-590764"
	I0417 19:34:37.321795  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetMachineName
	I0417 19:34:37.321986  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.324635  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.324983  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.325010  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.325129  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:37.325336  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.325483  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.325589  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:37.325734  115894 main.go:141] libmachine: Using SSH client type: native
	I0417 19:34:37.325930  115894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0417 19:34:37.325957  115894 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-590764 && echo "test-preload-590764" | sudo tee /etc/hostname
	I0417 19:34:37.447535  115894 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-590764
	
	I0417 19:34:37.447576  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.450487  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.450874  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.450901  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.451077  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:37.451293  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.451454  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.451609  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:37.451758  115894 main.go:141] libmachine: Using SSH client type: native
	I0417 19:34:37.451989  115894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0417 19:34:37.452011  115894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-590764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-590764/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-590764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:34:37.568614  115894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 19:34:37.568652  115894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 19:34:37.568674  115894 buildroot.go:174] setting up certificates
	I0417 19:34:37.568683  115894 provision.go:84] configureAuth start
	I0417 19:34:37.568693  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetMachineName
	I0417 19:34:37.568995  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetIP
	I0417 19:34:37.571508  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.571814  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.571852  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.571976  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.573970  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.574261  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.574295  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.574361  115894 provision.go:143] copyHostCerts
	I0417 19:34:37.574428  115894 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 19:34:37.574453  115894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:34:37.574523  115894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 19:34:37.574606  115894 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 19:34:37.574614  115894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:34:37.574637  115894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 19:34:37.574687  115894 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 19:34:37.574694  115894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:34:37.574713  115894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 19:34:37.574767  115894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.test-preload-590764 san=[127.0.0.1 192.168.39.86 localhost minikube test-preload-590764]
	I0417 19:34:37.660227  115894 provision.go:177] copyRemoteCerts
	I0417 19:34:37.660287  115894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:34:37.660316  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.662966  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.663300  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.663323  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.663508  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:37.663740  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.663925  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:37.664046  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:34:37.748014  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:34:37.773915  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0417 19:34:37.799107  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 19:34:37.824116  115894 provision.go:87] duration metric: took 255.41742ms to configureAuth
	I0417 19:34:37.824147  115894 buildroot.go:189] setting minikube options for container-runtime
	I0417 19:34:37.824333  115894 config.go:182] Loaded profile config "test-preload-590764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0417 19:34:37.824426  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:37.826907  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.827381  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:37.827409  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:37.827618  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:37.827817  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.828014  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:37.828164  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:37.828304  115894 main.go:141] libmachine: Using SSH client type: native
	I0417 19:34:37.828461  115894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0417 19:34:37.828478  115894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:34:38.093694  115894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:34:38.093720  115894 machine.go:97] duration metric: took 884.803583ms to provisionDockerMachine
	I0417 19:34:38.093732  115894 start.go:293] postStartSetup for "test-preload-590764" (driver="kvm2")
	I0417 19:34:38.093743  115894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:34:38.093765  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:38.094052  115894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:34:38.094075  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:38.096761  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.097110  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:38.097143  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.097268  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:38.097434  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:38.097607  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:38.097743  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:34:38.179859  115894 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:34:38.184244  115894 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:34:38.184271  115894 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:34:38.184349  115894 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:34:38.184421  115894 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:34:38.184549  115894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:34:38.194004  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:34:38.218491  115894 start.go:296] duration metric: took 124.744349ms for postStartSetup
	I0417 19:34:38.218537  115894 fix.go:56] duration metric: took 18.906349166s for fixHost
	I0417 19:34:38.218562  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:38.221366  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.221654  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:38.221674  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.221854  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:38.222014  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:38.222169  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:38.222306  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:38.222520  115894 main.go:141] libmachine: Using SSH client type: native
	I0417 19:34:38.222712  115894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0417 19:34:38.222726  115894 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0417 19:34:38.329459  115894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713382478.299194215
	
	I0417 19:34:38.329494  115894 fix.go:216] guest clock: 1713382478.299194215
	I0417 19:34:38.329502  115894 fix.go:229] Guest: 2024-04-17 19:34:38.299194215 +0000 UTC Remote: 2024-04-17 19:34:38.218541328 +0000 UTC m=+24.392713977 (delta=80.652887ms)
	I0417 19:34:38.329522  115894 fix.go:200] guest clock delta is within tolerance: 80.652887ms
	I0417 19:34:38.329530  115894 start.go:83] releasing machines lock for "test-preload-590764", held for 19.017367868s
	I0417 19:34:38.329547  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:38.329836  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetIP
	I0417 19:34:38.332317  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.332599  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:38.332626  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.332786  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:38.333290  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:38.333472  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:34:38.333575  115894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:34:38.333611  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:38.333710  115894 ssh_runner.go:195] Run: cat /version.json
	I0417 19:34:38.333737  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:34:38.336139  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.336454  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.336487  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:38.336512  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.336628  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:38.336817  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:38.336886  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:38.336912  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:38.337118  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:34:38.337120  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:38.337295  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:34:38.337292  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:34:38.337441  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:34:38.337541  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:34:38.423274  115894 ssh_runner.go:195] Run: systemctl --version
	I0417 19:34:38.447068  115894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:34:38.591203  115894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 19:34:38.597254  115894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:34:38.597334  115894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:34:38.613921  115894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0417 19:34:38.613951  115894 start.go:494] detecting cgroup driver to use...
	I0417 19:34:38.614033  115894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:34:38.631656  115894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:34:38.646812  115894 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:34:38.646868  115894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:34:38.661561  115894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:34:38.676072  115894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:34:38.794174  115894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:34:38.958814  115894 docker.go:233] disabling docker service ...
	I0417 19:34:38.958884  115894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:34:38.973804  115894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:34:38.986385  115894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:34:39.095167  115894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:34:39.208506  115894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:34:39.222894  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:34:39.241838  115894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0417 19:34:39.241910  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.252323  115894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:34:39.252394  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.262446  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.272720  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.282921  115894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:34:39.293252  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.303278  115894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.320964  115894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:34:39.331140  115894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:34:39.340446  115894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 19:34:39.340498  115894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 19:34:39.353042  115894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:34:39.363533  115894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:34:39.483690  115894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:34:39.617399  115894 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:34:39.617478  115894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:34:39.622195  115894 start.go:562] Will wait 60s for crictl version
	I0417 19:34:39.622242  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:39.625979  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:34:39.664905  115894 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:34:39.664976  115894 ssh_runner.go:195] Run: crio --version
	I0417 19:34:39.692727  115894 ssh_runner.go:195] Run: crio --version
	I0417 19:34:39.723166  115894 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0417 19:34:39.724749  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetIP
	I0417 19:34:39.727547  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:39.727909  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:34:39.727940  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:34:39.728132  115894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:34:39.732187  115894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:34:39.744611  115894 kubeadm.go:877] updating cluster {Name:test-preload-590764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-590764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:34:39.744724  115894 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0417 19:34:39.744767  115894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:34:39.780391  115894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0417 19:34:39.780451  115894 ssh_runner.go:195] Run: which lz4
	I0417 19:34:39.784357  115894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0417 19:34:39.788698  115894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0417 19:34:39.788724  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0417 19:34:41.470875  115894 crio.go:462] duration metric: took 1.6865444s to copy over tarball
	I0417 19:34:41.470955  115894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0417 19:34:43.874139  115894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.403146018s)
	I0417 19:34:43.875288  115894 crio.go:469] duration metric: took 2.404377884s to extract the tarball
	I0417 19:34:43.875324  115894 ssh_runner.go:146] rm: /preloaded.tar.lz4
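
For context, the preload flow logged above asks the runtime for its image list and, when the expected control-plane image is missing, copies the preloaded lz4 tarball onto the VM and unpacks it under /var. A rough Go sketch of that check-and-extract step follows; preloadImages is a hypothetical helper, with the image name, paths, and tar flags taken from the log (the string match on the raw JSON is a simplification of the real image lookup).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadImages extracts the preloaded image tarball if the expected
// apiserver image is not already known to the CRI runtime.
func preloadImages() error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("crictl images: %w", err)
	}
	if strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.24.4") {
		return nil // images already present, nothing to extract
	}
	// The tarball is assumed to have been copied to /preloaded.tar.lz4 already,
	// as in the scp step shown in the log.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	return cmd.Run()
}

func main() {
	if err := preloadImages(); err != nil {
		fmt.Println("preload failed:", err)
	}
}
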
	I0417 19:34:43.917068  115894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:34:43.959452  115894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0417 19:34:43.959483  115894 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0417 19:34:43.959567  115894 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:34:43.959597  115894 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0417 19:34:43.959571  115894 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0417 19:34:43.959624  115894 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0417 19:34:43.959635  115894 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0417 19:34:43.959746  115894 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0417 19:34:43.959600  115894 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0417 19:34:43.959829  115894 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0417 19:34:43.961226  115894 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0417 19:34:43.961243  115894 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0417 19:34:43.961251  115894 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0417 19:34:43.961246  115894 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0417 19:34:43.961248  115894 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0417 19:34:43.961231  115894 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:34:43.961226  115894 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0417 19:34:43.961378  115894 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0417 19:34:44.112040  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0417 19:34:44.112585  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0417 19:34:44.118726  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0417 19:34:44.121706  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0417 19:34:44.131243  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0417 19:34:44.136924  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0417 19:34:44.157461  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0417 19:34:44.254707  115894 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0417 19:34:44.254761  115894 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0417 19:34:44.254771  115894 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0417 19:34:44.254797  115894 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0417 19:34:44.254811  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.254828  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.292507  115894 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0417 19:34:44.292562  115894 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0417 19:34:44.292613  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.306136  115894 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0417 19:34:44.306185  115894 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0417 19:34:44.306206  115894 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0417 19:34:44.306224  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.306245  115894 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0417 19:34:44.306276  115894 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0417 19:34:44.306287  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.306297  115894 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0417 19:34:44.306305  115894 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0417 19:34:44.306318  115894 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0417 19:34:44.306357  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0417 19:34:44.306376  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.306335  115894 ssh_runner.go:195] Run: which crictl
	I0417 19:34:44.306388  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0417 19:34:44.306496  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0417 19:34:44.321378  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0417 19:34:44.321568  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0417 19:34:44.411884  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0417 19:34:44.411921  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0417 19:34:44.411992  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0417 19:34:44.412007  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0417 19:34:44.412013  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0417 19:34:44.412072  115894 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0417 19:34:44.412125  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0417 19:34:44.412177  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0417 19:34:44.427686  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0417 19:34:44.427786  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0417 19:34:44.439621  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0417 19:34:44.439639  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0417 19:34:44.439655  115894 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0417 19:34:44.439698  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0417 19:34:44.439713  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0417 19:34:44.486633  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0417 19:34:44.486753  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0417 19:34:44.489771  115894 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0417 19:34:44.489814  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0417 19:34:44.489861  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0417 19:34:44.489876  115894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0417 19:34:44.489906  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0417 19:34:44.606197  115894 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:34:47.135765  115894 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.696039889s)
	I0417 19:34:47.135796  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0417 19:34:47.135823  115894 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0417 19:34:47.135841  115894 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.69610681s)
	I0417 19:34:47.135870  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0417 19:34:47.135871  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0417 19:34:47.135916  115894 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.649146718s)
	I0417 19:34:47.135954  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0417 19:34:47.135981  115894 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.646093338s)
	I0417 19:34:47.135998  115894 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0417 19:34:47.136083  115894 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.529852698s)
	I0417 19:34:47.578376  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0417 19:34:47.578432  115894 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0417 19:34:47.578491  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0417 19:34:48.023929  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0417 19:34:48.023985  115894 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0417 19:34:48.024038  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0417 19:34:48.164616  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0417 19:34:48.164662  115894 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0417 19:34:48.164705  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0417 19:34:49.015329  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0417 19:34:49.015378  115894 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0417 19:34:49.015447  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0417 19:34:49.765862  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0417 19:34:49.765918  115894 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0417 19:34:49.765988  115894 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0417 19:34:51.920707  115894 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.154691213s)
	I0417 19:34:51.920742  115894 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0417 19:34:51.920845  115894 cache_images.go:123] Successfully loaded all cached images
	I0417 19:34:51.920865  115894 cache_images.go:92] duration metric: took 7.961364866s to LoadCachedImages
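
The serial "Loading image: ..." entries above boil down to running sudo podman load -i over each cached image archive under /var/lib/minikube/images. The Go sketch below mirrors that loop; loadCachedImages is illustrative rather than minikube's API, and the file list is copied from the log entries.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImages loads each cached image tarball into the CRI-O image
// store via podman, one at a time, as the log above does.
func loadCachedImages(tarballs []string) error {
	for _, t := range tarballs {
		if err := exec.Command("sudo", "podman", "load", "-i", t).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", t, err)
		}
		fmt.Println("loaded", t)
	}
	return nil
}

func main() {
	images := []string{
		"/var/lib/minikube/images/kube-controller-manager_v1.24.4",
		"/var/lib/minikube/images/coredns_v1.8.6",
		"/var/lib/minikube/images/kube-scheduler_v1.24.4",
		"/var/lib/minikube/images/pause_3.7",
		"/var/lib/minikube/images/kube-proxy_v1.24.4",
		"/var/lib/minikube/images/kube-apiserver_v1.24.4",
		"/var/lib/minikube/images/etcd_3.5.3-0",
	}
	if err := loadCachedImages(images); err != nil {
		fmt.Println(err)
	}
}
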
	I0417 19:34:51.920877  115894 kubeadm.go:928] updating node { 192.168.39.86 8443 v1.24.4 crio true true} ...
	I0417 19:34:51.920987  115894 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-590764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-590764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:34:51.921108  115894 ssh_runner.go:195] Run: crio config
	I0417 19:34:51.964894  115894 cni.go:84] Creating CNI manager for ""
	I0417 19:34:51.964932  115894 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:34:51.964951  115894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:34:51.964976  115894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-590764 NodeName:test-preload-590764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:34:51.965152  115894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-590764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:34:51.965220  115894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0417 19:34:51.975458  115894 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:34:51.975527  115894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:34:51.984662  115894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0417 19:34:52.001653  115894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0417 19:34:52.018385  115894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0417 19:34:52.036058  115894 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I0417 19:34:52.040229  115894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:34:52.052691  115894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:34:52.167848  115894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:34:52.186617  115894 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764 for IP: 192.168.39.86
	I0417 19:34:52.186641  115894 certs.go:194] generating shared ca certs ...
	I0417 19:34:52.186657  115894 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:34:52.186816  115894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:34:52.186854  115894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:34:52.186861  115894 certs.go:256] generating profile certs ...
	I0417 19:34:52.186949  115894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.key
	I0417 19:34:52.187013  115894 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/apiserver.key.e5e91e27
	I0417 19:34:52.187053  115894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/proxy-client.key
	I0417 19:34:52.187159  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:34:52.187200  115894 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:34:52.187218  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:34:52.187264  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:34:52.187295  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:34:52.187320  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:34:52.187374  115894 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:34:52.188144  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:34:52.225549  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:34:52.273674  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:34:52.314055  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:34:52.358179  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0417 19:34:52.388647  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0417 19:34:52.415995  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:34:52.440820  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:34:52.465905  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:34:52.490483  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:34:52.515178  115894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:34:52.539740  115894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:34:52.557196  115894 ssh_runner.go:195] Run: openssl version
	I0417 19:34:52.563041  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:34:52.574840  115894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:34:52.579433  115894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:34:52.579508  115894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:34:52.585066  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:34:52.595928  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:34:52.606849  115894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:34:52.611708  115894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:34:52.611766  115894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:34:52.617814  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:34:52.628854  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:34:52.639600  115894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:34:52.644239  115894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:34:52.644293  115894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:34:52.650026  115894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
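
The openssl/ln pairs above follow the standard OpenSSL trust-directory convention: each CA is hashed with openssl x509 -hash -noout and /etc/ssl/certs/<hash>.0 is pointed at the PEM (for example minikubeCA.pem maps to b5213941.0). A small Go sketch of that wiring is below; linkCert is a hypothetical helper, and for brevity it links the hash straight to the source PEM rather than via the intermediate /etc/ssl/certs copy used in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients
// use to locate trusted CAs.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("openssl hash %s: %w", pem, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/83207.pem",
		"/usr/share/ca-certificates/832072.pem",
	} {
		if err := linkCert(pem); err != nil {
			fmt.Println(err)
		}
	}
}
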
	I0417 19:34:52.660753  115894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:34:52.665290  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:34:52.671381  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:34:52.677168  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:34:52.683320  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:34:52.689220  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:34:52.694909  115894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0417 19:34:52.700858  115894 kubeadm.go:391] StartCluster: {Name:test-preload-590764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-590764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:34:52.700944  115894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:34:52.700985  115894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:34:52.737657  115894 cri.go:89] found id: ""
	I0417 19:34:52.737734  115894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0417 19:34:52.748187  115894 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0417 19:34:52.748204  115894 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0417 19:34:52.748209  115894 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0417 19:34:52.748253  115894 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0417 19:34:52.757893  115894 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0417 19:34:52.758333  115894 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-590764" does not appear in /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:34:52.758437  115894 kubeconfig.go:62] /home/jenkins/minikube-integration/18665-75973/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-590764" cluster setting kubeconfig missing "test-preload-590764" context setting]
	I0417 19:34:52.758742  115894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:34:52.759335  115894 kapi.go:59] client config for test-preload-590764: &rest.Config{Host:"https://192.168.39.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
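
The kapi.go entry above builds a client-go rest.Config directly from the profile's client certificate, key, and the cluster CA rather than loading a kubeconfig file. Assuming client-go is available, a minimal sketch of an equivalent client looks like this; it is an illustration using the paths and endpoint from the log, not minikube's helper.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Certificate-based rest.Config, mirroring the fields logged above.
	cfg := &rest.Config{
		Host: "https://192.168.39.86:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.key",
			CAFile:   "/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods, the same namespace the log waits on later.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
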
	I0417 19:34:52.759930  115894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0417 19:34:52.769319  115894 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.86
	I0417 19:34:52.769345  115894 kubeadm.go:1154] stopping kube-system containers ...
	I0417 19:34:52.769355  115894 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0417 19:34:52.769404  115894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:34:52.805785  115894 cri.go:89] found id: ""
	I0417 19:34:52.805857  115894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0417 19:34:52.822423  115894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:34:52.832196  115894 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:34:52.832226  115894 kubeadm.go:156] found existing configuration files:
	
	I0417 19:34:52.832285  115894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:34:52.841545  115894 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:34:52.841599  115894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:34:52.851076  115894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:34:52.860641  115894 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:34:52.860703  115894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:34:52.870206  115894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:34:52.879605  115894 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:34:52.879657  115894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:34:52.889283  115894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:34:52.898573  115894 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:34:52.898639  115894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:34:52.908405  115894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:34:52.918106  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0417 19:34:53.017765  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0417 19:34:53.652104  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0417 19:34:53.910896  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0417 19:34:53.980026  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
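
Note that the restart path above does not run a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A hedged Go sketch of that sequence follows, with the phase list, binary path, and config path taken from the log; the loop itself is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order they appear in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.24.4:$PATH kubeadm init phase " +
			phase + " --config /var/tmp/minikube/kubeadm.yaml"
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
		fmt.Println("completed phase", phase)
	}
}
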
	I0417 19:34:54.055174  115894 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:34:54.055273  115894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:34:54.556386  115894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:34:55.056412  115894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:34:55.076225  115894 api_server.go:72] duration metric: took 1.021050956s to wait for apiserver process to appear ...
	I0417 19:34:55.076280  115894 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:34:55.076305  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:34:55.076824  115894 api_server.go:269] stopped: https://192.168.39.86:8443/healthz: Get "https://192.168.39.86:8443/healthz": dial tcp 192.168.39.86:8443: connect: connection refused
	I0417 19:34:55.576554  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:34:59.514390  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0417 19:34:59.514422  115894 api_server.go:103] status: https://192.168.39.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0417 19:34:59.514437  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:34:59.570630  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0417 19:34:59.570671  115894 api_server.go:103] status: https://192.168.39.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0417 19:34:59.576830  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:34:59.594312  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0417 19:34:59.594352  115894 api_server.go:103] status: https://192.168.39.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0417 19:35:00.076994  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:35:00.084975  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0417 19:35:00.085001  115894 api_server.go:103] status: https://192.168.39.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0417 19:35:00.576573  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:35:00.583800  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0417 19:35:00.583844  115894 api_server.go:103] status: https://192.168.39.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0417 19:35:01.076376  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:35:01.084787  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I0417 19:35:01.095631  115894 api_server.go:141] control plane version: v1.24.4
	I0417 19:35:01.095664  115894 api_server.go:131] duration metric: took 6.019375404s to wait for apiserver health ...
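
	The `[+]`/`[-]` breakdown above is what kube-apiserver's `/healthz` endpoint returns alongside an HTTP 500 while a post-start hook (here `rbac/bootstrap-roles`) is still completing; once every check passes it returns 200 with the body `ok`, which is what the polling loop waits for. A minimal sketch of such a probe in Go follows — this is not minikube's own code, and the hard-coded address plus the skipped TLS verification are illustrative assumptions only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skipping TLS verification keeps the sketch short; a real client
		// would trust the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.39.86:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// On failure the endpoint returns 500 plus the per-check
				// breakdown recorded in the log above.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
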
	I0417 19:35:01.095674  115894 cni.go:84] Creating CNI manager for ""
	I0417 19:35:01.095681  115894 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:35:01.097473  115894 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0417 19:35:01.099039  115894 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0417 19:35:01.128418  115894 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
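
	The 496-byte `/etc/cni/net.d/1-k8s.conflist` written here configures the bridge CNI plugin chain mentioned at "Configuring bridge CNI" above. The Go sketch below writes a minimal conflist that illustrates the general format only; the field values (plugin list, bridge name, subnet) are assumptions, not a reproduction of the exact file minikube generates:

	package main

	import "os"

	func main() {
		// A minimal bridge + portmap plugin chain in CNI conflist form.
		// Values here (name, bridge, subnet) are illustrative assumptions.
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`
		// Writing to /etc/cni/net.d requires root; the path matches the log.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
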
	I0417 19:35:01.154716  115894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:35:01.164095  115894 system_pods.go:59] 7 kube-system pods found
	I0417 19:35:01.164134  115894 system_pods.go:61] "coredns-6d4b75cb6d-xzbhq" [225a1ac7-2a92-4cac-8996-def6e30ecca0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0417 19:35:01.164141  115894 system_pods.go:61] "etcd-test-preload-590764" [6c0efe80-a909-4a1d-889f-8e144b866cdf] Running
	I0417 19:35:01.164149  115894 system_pods.go:61] "kube-apiserver-test-preload-590764" [954692f6-cd82-4fba-86d6-1aab79b792fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0417 19:35:01.164153  115894 system_pods.go:61] "kube-controller-manager-test-preload-590764" [1ce3ae1e-13d0-49ed-acdc-9d9501b81590] Running
	I0417 19:35:01.164157  115894 system_pods.go:61] "kube-proxy-dgfb8" [dfd20a08-6164-4a59-a296-11aeb5baf7fd] Running
	I0417 19:35:01.164166  115894 system_pods.go:61] "kube-scheduler-test-preload-590764" [8049fb06-0798-45d3-9fe6-7b1aaee0c008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0417 19:35:01.164170  115894 system_pods.go:61] "storage-provisioner" [b0d3375f-7c93-463b-8462-c18082867a89] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0417 19:35:01.164177  115894 system_pods.go:74] duration metric: took 9.436381ms to wait for pod list to return data ...
	I0417 19:35:01.164187  115894 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:35:01.169871  115894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 19:35:01.169899  115894 node_conditions.go:123] node cpu capacity is 2
	I0417 19:35:01.169913  115894 node_conditions.go:105] duration metric: took 5.719969ms to run NodePressure ...
	I0417 19:35:01.169948  115894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0417 19:35:01.432162  115894 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0417 19:35:01.439304  115894 kubeadm.go:733] kubelet initialised
	I0417 19:35:01.439334  115894 kubeadm.go:734] duration metric: took 7.143659ms waiting for restarted kubelet to initialise ...
	I0417 19:35:01.439345  115894 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:35:01.453425  115894 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:01.462469  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.462501  115894 pod_ready.go:81] duration metric: took 9.040276ms for pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:01.462513  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.462521  115894 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:01.469473  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "etcd-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.469497  115894 pod_ready.go:81] duration metric: took 6.965919ms for pod "etcd-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:01.469505  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "etcd-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.469511  115894 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:01.476215  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "kube-apiserver-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.476235  115894 pod_ready.go:81] duration metric: took 6.713307ms for pod "kube-apiserver-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:01.476242  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "kube-apiserver-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.476252  115894 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:01.558941  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.558969  115894 pod_ready.go:81] duration metric: took 82.707627ms for pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:01.558991  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.558997  115894 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dgfb8" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:01.958161  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "kube-proxy-dgfb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.958195  115894 pod_ready.go:81] duration metric: took 399.187585ms for pod "kube-proxy-dgfb8" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:01.958205  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "kube-proxy-dgfb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:01.958212  115894 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:02.359551  115894 pod_ready.go:97] node "test-preload-590764" hosting pod "kube-scheduler-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:02.359582  115894 pod_ready.go:81] duration metric: took 401.362557ms for pod "kube-scheduler-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	E0417 19:35:02.359592  115894 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-590764" hosting pod "kube-scheduler-test-preload-590764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:02.359620  115894 pod_ready.go:38] duration metric: took 920.26333ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:35:02.359643  115894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 19:35:02.372630  115894 ops.go:34] apiserver oom_adj: -16
	I0417 19:35:02.372662  115894 kubeadm.go:591] duration metric: took 9.624446242s to restartPrimaryControlPlane
	I0417 19:35:02.372675  115894 kubeadm.go:393] duration metric: took 9.671823862s to StartCluster
	I0417 19:35:02.372698  115894 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:35:02.372803  115894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:35:02.373451  115894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:35:02.373669  115894 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:35:02.375382  115894 out.go:177] * Verifying Kubernetes components...
	I0417 19:35:02.373748  115894 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0417 19:35:02.373883  115894 config.go:182] Loaded profile config "test-preload-590764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0417 19:35:02.376544  115894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:35:02.376556  115894 addons.go:69] Setting default-storageclass=true in profile "test-preload-590764"
	I0417 19:35:02.376597  115894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-590764"
	I0417 19:35:02.376547  115894 addons.go:69] Setting storage-provisioner=true in profile "test-preload-590764"
	I0417 19:35:02.376645  115894 addons.go:234] Setting addon storage-provisioner=true in "test-preload-590764"
	W0417 19:35:02.376657  115894 addons.go:243] addon storage-provisioner should already be in state true
	I0417 19:35:02.376686  115894 host.go:66] Checking if "test-preload-590764" exists ...
	I0417 19:35:02.377017  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:35:02.377032  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:35:02.377062  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:35:02.377154  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:35:02.391918  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0417 19:35:02.391918  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0417 19:35:02.392394  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:35:02.392521  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:35:02.392917  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:35:02.392943  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:35:02.393105  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:35:02.393125  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:35:02.393315  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:35:02.393467  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:35:02.393505  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetState
	I0417 19:35:02.393893  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:35:02.393926  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:35:02.395874  115894 kapi.go:59] client config for test-preload-590764: &rest.Config{Host:"https://192.168.39.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.crt", KeyFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/profiles/test-preload-590764/client.key", CAFile:"/home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e2a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0417 19:35:02.396189  115894 addons.go:234] Setting addon default-storageclass=true in "test-preload-590764"
	W0417 19:35:02.396212  115894 addons.go:243] addon default-storageclass should already be in state true
	I0417 19:35:02.396242  115894 host.go:66] Checking if "test-preload-590764" exists ...
	I0417 19:35:02.396599  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:35:02.396638  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:35:02.408333  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0417 19:35:02.408809  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:35:02.409304  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:35:02.409329  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:35:02.409693  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:35:02.409894  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetState
	I0417 19:35:02.411263  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44243
	I0417 19:35:02.411534  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:35:02.411655  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:35:02.413452  115894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:35:02.412090  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:35:02.414663  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:35:02.414767  115894 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:35:02.414790  115894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 19:35:02.414809  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:35:02.415016  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:35:02.415631  115894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:35:02.415683  115894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:35:02.417845  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:35:02.418289  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:35:02.418319  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:35:02.418496  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:35:02.418701  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:35:02.418865  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:35:02.419020  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:35:02.436440  115894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0417 19:35:02.436894  115894 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:35:02.437432  115894 main.go:141] libmachine: Using API Version  1
	I0417 19:35:02.437455  115894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:35:02.437759  115894 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:35:02.438040  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetState
	I0417 19:35:02.439680  115894 main.go:141] libmachine: (test-preload-590764) Calling .DriverName
	I0417 19:35:02.439949  115894 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 19:35:02.439964  115894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 19:35:02.439979  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHHostname
	I0417 19:35:02.442702  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:35:02.443124  115894 main.go:141] libmachine: (test-preload-590764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:1d:b3", ip: ""} in network mk-test-preload-590764: {Iface:virbr1 ExpiryTime:2024-04-17 20:34:30 +0000 UTC Type:0 Mac:52:54:00:f8:1d:b3 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:test-preload-590764 Clientid:01:52:54:00:f8:1d:b3}
	I0417 19:35:02.443155  115894 main.go:141] libmachine: (test-preload-590764) DBG | domain test-preload-590764 has defined IP address 192.168.39.86 and MAC address 52:54:00:f8:1d:b3 in network mk-test-preload-590764
	I0417 19:35:02.443326  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHPort
	I0417 19:35:02.443535  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHKeyPath
	I0417 19:35:02.443690  115894 main.go:141] libmachine: (test-preload-590764) Calling .GetSSHUsername
	I0417 19:35:02.443853  115894 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/test-preload-590764/id_rsa Username:docker}
	I0417 19:35:02.554086  115894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:35:02.574053  115894 node_ready.go:35] waiting up to 6m0s for node "test-preload-590764" to be "Ready" ...
	I0417 19:35:02.649886  115894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:35:02.659500  115894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 19:35:03.572229  115894 main.go:141] libmachine: Making call to close driver server
	I0417 19:35:03.572262  115894 main.go:141] libmachine: (test-preload-590764) Calling .Close
	I0417 19:35:03.572588  115894 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:35:03.572609  115894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:35:03.572620  115894 main.go:141] libmachine: Making call to close driver server
	I0417 19:35:03.572629  115894 main.go:141] libmachine: (test-preload-590764) Calling .Close
	I0417 19:35:03.572878  115894 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:35:03.572906  115894 main.go:141] libmachine: (test-preload-590764) DBG | Closing plugin on server side
	I0417 19:35:03.572924  115894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:35:03.595399  115894 main.go:141] libmachine: Making call to close driver server
	I0417 19:35:03.595422  115894 main.go:141] libmachine: (test-preload-590764) Calling .Close
	I0417 19:35:03.595730  115894 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:35:03.595748  115894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:35:03.595757  115894 main.go:141] libmachine: (test-preload-590764) DBG | Closing plugin on server side
	I0417 19:35:03.595760  115894 main.go:141] libmachine: Making call to close driver server
	I0417 19:35:03.595789  115894 main.go:141] libmachine: (test-preload-590764) Calling .Close
	I0417 19:35:03.596024  115894 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:35:03.596044  115894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:35:03.603532  115894 main.go:141] libmachine: Making call to close driver server
	I0417 19:35:03.603558  115894 main.go:141] libmachine: (test-preload-590764) Calling .Close
	I0417 19:35:03.603882  115894 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:35:03.603904  115894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:35:03.603923  115894 main.go:141] libmachine: (test-preload-590764) DBG | Closing plugin on server side
	I0417 19:35:03.605783  115894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0417 19:35:03.607053  115894 addons.go:505] duration metric: took 1.233322439s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0417 19:35:04.578891  115894 node_ready.go:53] node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:07.078551  115894 node_ready.go:53] node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:09.578112  115894 node_ready.go:53] node "test-preload-590764" has status "Ready":"False"
	I0417 19:35:10.077863  115894 node_ready.go:49] node "test-preload-590764" has status "Ready":"True"
	I0417 19:35:10.077886  115894 node_ready.go:38] duration metric: took 7.503798011s for node "test-preload-590764" to be "Ready" ...
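
	The 7.5s wait above is a poll on the node's `Ready` condition. A condensed sketch of that pattern with client-go follows; the kubeconfig path and node name are taken from the log, but the code itself is an illustrative assumption, not minikube's implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18665-75973/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms for up to 6 minutes, mirroring the wait window in the log.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-590764", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
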
	I0417 19:35:10.077903  115894 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:35:10.082937  115894 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.087847  115894 pod_ready.go:92] pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.087868  115894 pod_ready.go:81] duration metric: took 4.907882ms for pod "coredns-6d4b75cb6d-xzbhq" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.087876  115894 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.092556  115894 pod_ready.go:92] pod "etcd-test-preload-590764" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.092575  115894 pod_ready.go:81] duration metric: took 4.693344ms for pod "etcd-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.092583  115894 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.097938  115894 pod_ready.go:92] pod "kube-apiserver-test-preload-590764" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.097956  115894 pod_ready.go:81] duration metric: took 5.366804ms for pod "kube-apiserver-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.097963  115894 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.101875  115894 pod_ready.go:92] pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.101891  115894 pod_ready.go:81] duration metric: took 3.922684ms for pod "kube-controller-manager-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.101899  115894 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dgfb8" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.478116  115894 pod_ready.go:92] pod "kube-proxy-dgfb8" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.478142  115894 pod_ready.go:81] duration metric: took 376.238024ms for pod "kube-proxy-dgfb8" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.478152  115894 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.878467  115894 pod_ready.go:92] pod "kube-scheduler-test-preload-590764" in "kube-system" namespace has status "Ready":"True"
	I0417 19:35:10.878497  115894 pod_ready.go:81] duration metric: took 400.339154ms for pod "kube-scheduler-test-preload-590764" in "kube-system" namespace to be "Ready" ...
	I0417 19:35:10.878508  115894 pod_ready.go:38] duration metric: took 800.595898ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:35:10.878532  115894 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:35:10.878591  115894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:35:10.895626  115894 api_server.go:72] duration metric: took 8.521925023s to wait for apiserver process to appear ...
	I0417 19:35:10.895672  115894 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:35:10.895693  115894 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0417 19:35:10.900871  115894 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I0417 19:35:10.901721  115894 api_server.go:141] control plane version: v1.24.4
	I0417 19:35:10.901744  115894 api_server.go:131] duration metric: took 6.063356ms to wait for apiserver health ...
	I0417 19:35:10.901753  115894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:35:11.081275  115894 system_pods.go:59] 7 kube-system pods found
	I0417 19:35:11.081312  115894 system_pods.go:61] "coredns-6d4b75cb6d-xzbhq" [225a1ac7-2a92-4cac-8996-def6e30ecca0] Running
	I0417 19:35:11.081321  115894 system_pods.go:61] "etcd-test-preload-590764" [6c0efe80-a909-4a1d-889f-8e144b866cdf] Running
	I0417 19:35:11.081326  115894 system_pods.go:61] "kube-apiserver-test-preload-590764" [954692f6-cd82-4fba-86d6-1aab79b792fb] Running
	I0417 19:35:11.081332  115894 system_pods.go:61] "kube-controller-manager-test-preload-590764" [1ce3ae1e-13d0-49ed-acdc-9d9501b81590] Running
	I0417 19:35:11.081336  115894 system_pods.go:61] "kube-proxy-dgfb8" [dfd20a08-6164-4a59-a296-11aeb5baf7fd] Running
	I0417 19:35:11.081340  115894 system_pods.go:61] "kube-scheduler-test-preload-590764" [8049fb06-0798-45d3-9fe6-7b1aaee0c008] Running
	I0417 19:35:11.081345  115894 system_pods.go:61] "storage-provisioner" [b0d3375f-7c93-463b-8462-c18082867a89] Running
	I0417 19:35:11.081352  115894 system_pods.go:74] duration metric: took 179.592059ms to wait for pod list to return data ...
	I0417 19:35:11.081361  115894 default_sa.go:34] waiting for default service account to be created ...
	I0417 19:35:11.278019  115894 default_sa.go:45] found service account: "default"
	I0417 19:35:11.278055  115894 default_sa.go:55] duration metric: took 196.684663ms for default service account to be created ...
	I0417 19:35:11.278069  115894 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 19:35:11.482529  115894 system_pods.go:86] 7 kube-system pods found
	I0417 19:35:11.482567  115894 system_pods.go:89] "coredns-6d4b75cb6d-xzbhq" [225a1ac7-2a92-4cac-8996-def6e30ecca0] Running
	I0417 19:35:11.482575  115894 system_pods.go:89] "etcd-test-preload-590764" [6c0efe80-a909-4a1d-889f-8e144b866cdf] Running
	I0417 19:35:11.482582  115894 system_pods.go:89] "kube-apiserver-test-preload-590764" [954692f6-cd82-4fba-86d6-1aab79b792fb] Running
	I0417 19:35:11.482588  115894 system_pods.go:89] "kube-controller-manager-test-preload-590764" [1ce3ae1e-13d0-49ed-acdc-9d9501b81590] Running
	I0417 19:35:11.482594  115894 system_pods.go:89] "kube-proxy-dgfb8" [dfd20a08-6164-4a59-a296-11aeb5baf7fd] Running
	I0417 19:35:11.482598  115894 system_pods.go:89] "kube-scheduler-test-preload-590764" [8049fb06-0798-45d3-9fe6-7b1aaee0c008] Running
	I0417 19:35:11.482603  115894 system_pods.go:89] "storage-provisioner" [b0d3375f-7c93-463b-8462-c18082867a89] Running
	I0417 19:35:11.482612  115894 system_pods.go:126] duration metric: took 204.536219ms to wait for k8s-apps to be running ...
	I0417 19:35:11.482621  115894 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 19:35:11.482674  115894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:35:11.498955  115894 system_svc.go:56] duration metric: took 16.326735ms WaitForService to wait for kubelet
	I0417 19:35:11.498991  115894 kubeadm.go:576] duration metric: took 9.125293565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:35:11.499016  115894 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:35:11.679859  115894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 19:35:11.679891  115894 node_conditions.go:123] node cpu capacity is 2
	I0417 19:35:11.679906  115894 node_conditions.go:105] duration metric: took 180.884077ms to run NodePressure ...
	I0417 19:35:11.679922  115894 start.go:240] waiting for startup goroutines ...
	I0417 19:35:11.679930  115894 start.go:245] waiting for cluster config update ...
	I0417 19:35:11.679943  115894 start.go:254] writing updated cluster config ...
	I0417 19:35:11.680262  115894 ssh_runner.go:195] Run: rm -f paused
	I0417 19:35:11.727182  115894 start.go:600] kubectl: 1.29.4, cluster: 1.24.4 (minor skew: 5)
	I0417 19:35:11.729209  115894 out.go:177] 
	W0417 19:35:11.730622  115894 out.go:239] ! /usr/local/bin/kubectl is version 1.29.4, which may have incompatibilities with Kubernetes 1.24.4.
	I0417 19:35:11.732236  115894 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0417 19:35:11.733841  115894 out.go:177] * Done! kubectl is now configured to use "test-preload-590764" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.734498578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382512734474153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=198c7fad-b152-4702-ac59-e6daad0516ce name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.735268182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb4fd8f2-07d4-4ba5-b3f2-9054bdc93f9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.735320616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb4fd8f2-07d4-4ba5-b3f2-9054bdc93f9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.735512493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff9d79dbfad3d938ec58ed93f3696167c1a1ce08fbe90820e02fdcee1e7e903e,PodSandboxId:e1f9544ea4bf78304f6bc0892862354fd2603e9aabd24524f8af3a64482b962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713382508328427092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xzbhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225a1ac7-2a92-4cac-8996-def6e30ecca0,},Annotations:map[string]string{io.kubernetes.container.hash: f36e9750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec95550eb454db509c98d268b7c40131dabf9c3ca5e996349d9d47d086da3479,PodSandboxId:6b3fb88504b3a166a26380e9928a03b183d8fea567dd103ec45919a09c1c7aaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713382501029464969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dgfb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dfd20a08-6164-4a59-a296-11aeb5baf7fd,},Annotations:map[string]string{io.kubernetes.container.hash: 37c14d33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f40685a3647f69998fc6c0ab56eca912a0adfad72a64e4f08a0478a5a7ce43f,PodSandboxId:d6ff2e76e8525fab4ee9af959a940ddefccbe21df39f5ae6b53749ad042fb64e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713382500741774292,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0
d3375f-7c93-463b-8462-c18082867a89,},Annotations:map[string]string{io.kubernetes.container.hash: bbeb9a32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe79c5a145d3932eadc07987f8e042541ccc55044523e79276b3e1cdf99647b,PodSandboxId:bf01db178192abf224bb9453acd6981270d5daec816127152ae303e7e423a294,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713382494836682846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d234604c9
414af23a20345e51d837ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8cd2c66c6f6402265484ad902dfe293b87d846321a5c9265f93c32a92c96c8,PodSandboxId:1639fcfa3ce7169b3096e22984ad1bdfc41a93431f4d83bbc1aa99c55eea744d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713382494776060164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 30899c236a1cf49e0c4768045691bd4d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8770e3708fd3821ff4f01a1f62322936f917c9922953ce34623ec5a76ffd820,PodSandboxId:9417e194997a79fca08232f88d9714ab4c1bfafe327c1168ddcd9e208f408857,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713382494766396305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c
2dcbd5cb250b529a5cb8aaef9851,},Annotations:map[string]string{io.kubernetes.container.hash: a81cd497,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ec18f52249aefed13d8cf828454626404c09a73f1602fbde30a00f10da03cc,PodSandboxId:8b40f8d91c4e904ba4e758d589a3aac8f30398770f64f430f3ae425b41d27e93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713382494752055349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3624ed694e6b503c633deed30583fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 75fe9c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb4fd8f2-07d4-4ba5-b3f2-9054bdc93f9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.771455040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1505f050-8765-4a72-a293-823e05d6953c name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.771549744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1505f050-8765-4a72-a293-823e05d6953c name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.772563922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ef46e6b-bb0b-4e14-a27d-f23e2ab83541 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.774000070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382512773974112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ef46e6b-bb0b-4e14-a27d-f23e2ab83541 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.774647570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f836a86f-6ee6-4822-84bb-2cb33787def8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.774840100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f836a86f-6ee6-4822-84bb-2cb33787def8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.775302048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff9d79dbfad3d938ec58ed93f3696167c1a1ce08fbe90820e02fdcee1e7e903e,PodSandboxId:e1f9544ea4bf78304f6bc0892862354fd2603e9aabd24524f8af3a64482b962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713382508328427092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xzbhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225a1ac7-2a92-4cac-8996-def6e30ecca0,},Annotations:map[string]string{io.kubernetes.container.hash: f36e9750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec95550eb454db509c98d268b7c40131dabf9c3ca5e996349d9d47d086da3479,PodSandboxId:6b3fb88504b3a166a26380e9928a03b183d8fea567dd103ec45919a09c1c7aaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713382501029464969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dgfb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dfd20a08-6164-4a59-a296-11aeb5baf7fd,},Annotations:map[string]string{io.kubernetes.container.hash: 37c14d33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f40685a3647f69998fc6c0ab56eca912a0adfad72a64e4f08a0478a5a7ce43f,PodSandboxId:d6ff2e76e8525fab4ee9af959a940ddefccbe21df39f5ae6b53749ad042fb64e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713382500741774292,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0
d3375f-7c93-463b-8462-c18082867a89,},Annotations:map[string]string{io.kubernetes.container.hash: bbeb9a32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe79c5a145d3932eadc07987f8e042541ccc55044523e79276b3e1cdf99647b,PodSandboxId:bf01db178192abf224bb9453acd6981270d5daec816127152ae303e7e423a294,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713382494836682846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d234604c9
414af23a20345e51d837ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8cd2c66c6f6402265484ad902dfe293b87d846321a5c9265f93c32a92c96c8,PodSandboxId:1639fcfa3ce7169b3096e22984ad1bdfc41a93431f4d83bbc1aa99c55eea744d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713382494776060164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 30899c236a1cf49e0c4768045691bd4d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8770e3708fd3821ff4f01a1f62322936f917c9922953ce34623ec5a76ffd820,PodSandboxId:9417e194997a79fca08232f88d9714ab4c1bfafe327c1168ddcd9e208f408857,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713382494766396305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c
2dcbd5cb250b529a5cb8aaef9851,},Annotations:map[string]string{io.kubernetes.container.hash: a81cd497,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ec18f52249aefed13d8cf828454626404c09a73f1602fbde30a00f10da03cc,PodSandboxId:8b40f8d91c4e904ba4e758d589a3aac8f30398770f64f430f3ae425b41d27e93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713382494752055349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3624ed694e6b503c633deed30583fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 75fe9c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f836a86f-6ee6-4822-84bb-2cb33787def8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.814975051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b0856b0-c219-4a6f-a625-d7f1b034397b name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.815115825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b0856b0-c219-4a6f-a625-d7f1b034397b name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.816307260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92a7f437-4bfb-4923-b415-7a55cf960f84 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.816762527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382512816741822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92a7f437-4bfb-4923-b415-7a55cf960f84 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.817834937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ecdbd75-dff4-4308-9e55-7eec0385e1b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.818050529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ecdbd75-dff4-4308-9e55-7eec0385e1b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.818451859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff9d79dbfad3d938ec58ed93f3696167c1a1ce08fbe90820e02fdcee1e7e903e,PodSandboxId:e1f9544ea4bf78304f6bc0892862354fd2603e9aabd24524f8af3a64482b962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713382508328427092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xzbhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225a1ac7-2a92-4cac-8996-def6e30ecca0,},Annotations:map[string]string{io.kubernetes.container.hash: f36e9750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec95550eb454db509c98d268b7c40131dabf9c3ca5e996349d9d47d086da3479,PodSandboxId:6b3fb88504b3a166a26380e9928a03b183d8fea567dd103ec45919a09c1c7aaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713382501029464969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dgfb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dfd20a08-6164-4a59-a296-11aeb5baf7fd,},Annotations:map[string]string{io.kubernetes.container.hash: 37c14d33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f40685a3647f69998fc6c0ab56eca912a0adfad72a64e4f08a0478a5a7ce43f,PodSandboxId:d6ff2e76e8525fab4ee9af959a940ddefccbe21df39f5ae6b53749ad042fb64e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713382500741774292,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0
d3375f-7c93-463b-8462-c18082867a89,},Annotations:map[string]string{io.kubernetes.container.hash: bbeb9a32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe79c5a145d3932eadc07987f8e042541ccc55044523e79276b3e1cdf99647b,PodSandboxId:bf01db178192abf224bb9453acd6981270d5daec816127152ae303e7e423a294,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713382494836682846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d234604c9
414af23a20345e51d837ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8cd2c66c6f6402265484ad902dfe293b87d846321a5c9265f93c32a92c96c8,PodSandboxId:1639fcfa3ce7169b3096e22984ad1bdfc41a93431f4d83bbc1aa99c55eea744d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713382494776060164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 30899c236a1cf49e0c4768045691bd4d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8770e3708fd3821ff4f01a1f62322936f917c9922953ce34623ec5a76ffd820,PodSandboxId:9417e194997a79fca08232f88d9714ab4c1bfafe327c1168ddcd9e208f408857,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713382494766396305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c
2dcbd5cb250b529a5cb8aaef9851,},Annotations:map[string]string{io.kubernetes.container.hash: a81cd497,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ec18f52249aefed13d8cf828454626404c09a73f1602fbde30a00f10da03cc,PodSandboxId:8b40f8d91c4e904ba4e758d589a3aac8f30398770f64f430f3ae425b41d27e93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713382494752055349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3624ed694e6b503c633deed30583fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 75fe9c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ecdbd75-dff4-4308-9e55-7eec0385e1b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.858968732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ba49534-60b0-4809-a1f0-519b024e58fc name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.859044516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ba49534-60b0-4809-a1f0-519b024e58fc name=/runtime.v1.RuntimeService/Version
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.861010756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23431e0d-3ca3-4400-8620-e0490af102a9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.861508374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713382512861484702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23431e0d-3ca3-4400-8620-e0490af102a9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.862250543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcd06ed7-56cc-4a71-a597-241f6ce9d3ae name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.862357105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcd06ed7-56cc-4a71-a597-241f6ce9d3ae name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:35:12 test-preload-590764 crio[684]: time="2024-04-17 19:35:12.862516849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff9d79dbfad3d938ec58ed93f3696167c1a1ce08fbe90820e02fdcee1e7e903e,PodSandboxId:e1f9544ea4bf78304f6bc0892862354fd2603e9aabd24524f8af3a64482b962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713382508328427092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xzbhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225a1ac7-2a92-4cac-8996-def6e30ecca0,},Annotations:map[string]string{io.kubernetes.container.hash: f36e9750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec95550eb454db509c98d268b7c40131dabf9c3ca5e996349d9d47d086da3479,PodSandboxId:6b3fb88504b3a166a26380e9928a03b183d8fea567dd103ec45919a09c1c7aaa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713382501029464969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dgfb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dfd20a08-6164-4a59-a296-11aeb5baf7fd,},Annotations:map[string]string{io.kubernetes.container.hash: 37c14d33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f40685a3647f69998fc6c0ab56eca912a0adfad72a64e4f08a0478a5a7ce43f,PodSandboxId:d6ff2e76e8525fab4ee9af959a940ddefccbe21df39f5ae6b53749ad042fb64e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713382500741774292,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0
d3375f-7c93-463b-8462-c18082867a89,},Annotations:map[string]string{io.kubernetes.container.hash: bbeb9a32,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe79c5a145d3932eadc07987f8e042541ccc55044523e79276b3e1cdf99647b,PodSandboxId:bf01db178192abf224bb9453acd6981270d5daec816127152ae303e7e423a294,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713382494836682846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d234604c9
414af23a20345e51d837ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8cd2c66c6f6402265484ad902dfe293b87d846321a5c9265f93c32a92c96c8,PodSandboxId:1639fcfa3ce7169b3096e22984ad1bdfc41a93431f4d83bbc1aa99c55eea744d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713382494776060164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 30899c236a1cf49e0c4768045691bd4d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8770e3708fd3821ff4f01a1f62322936f917c9922953ce34623ec5a76ffd820,PodSandboxId:9417e194997a79fca08232f88d9714ab4c1bfafe327c1168ddcd9e208f408857,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713382494766396305,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c
2dcbd5cb250b529a5cb8aaef9851,},Annotations:map[string]string{io.kubernetes.container.hash: a81cd497,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ec18f52249aefed13d8cf828454626404c09a73f1602fbde30a00f10da03cc,PodSandboxId:8b40f8d91c4e904ba4e758d589a3aac8f30398770f64f430f3ae425b41d27e93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713382494752055349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea3624ed694e6b503c633deed30583fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 75fe9c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcd06ed7-56cc-4a71-a597-241f6ce9d3ae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff9d79dbfad3d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   e1f9544ea4bf7       coredns-6d4b75cb6d-xzbhq
	ec95550eb454d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   6b3fb88504b3a       kube-proxy-dgfb8
	0f40685a3647f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   d6ff2e76e8525       storage-provisioner
	ebe79c5a145d3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   bf01db178192a       kube-scheduler-test-preload-590764
	fa8cd2c66c6f6       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   1639fcfa3ce71       kube-controller-manager-test-preload-590764
	b8770e3708fd3       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   9417e194997a7       kube-apiserver-test-preload-590764
	70ec18f52249a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   8b40f8d91c4e9       etcd-test-preload-590764
	
	
	==> coredns [ff9d79dbfad3d938ec58ed93f3696167c1a1ce08fbe90820e02fdcee1e7e903e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:43381 - 23274 "HINFO IN 6363056113179841457.6576963308195860359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020464336s
	
	
	==> describe nodes <==
	Name:               test-preload-590764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-590764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=test-preload-590764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_33_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:33:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-590764
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:35:09 +0000   Wed, 17 Apr 2024 19:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:35:09 +0000   Wed, 17 Apr 2024 19:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:35:09 +0000   Wed, 17 Apr 2024 19:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:35:09 +0000   Wed, 17 Apr 2024 19:35:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    test-preload-590764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1977fb15d89e44608b062ec3af7fa272
	  System UUID:                1977fb15-d89e-4460-8b06-2ec3af7fa272
	  Boot ID:                    c3881ef6-b672-4789-aeb7-d8f3887e08ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xzbhq                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     71s
	  kube-system                 etcd-test-preload-590764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         84s
	  kube-system                 kube-apiserver-test-preload-590764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-test-preload-590764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-dgfb8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-test-preload-590764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s                kubelet          Node test-preload-590764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet          Node test-preload-590764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet          Node test-preload-590764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                73s                kubelet          Node test-preload-590764 status is now: NodeReady
	  Normal  RegisteredNode           72s                node-controller  Node test-preload-590764 event: Registered Node test-preload-590764 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-590764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-590764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-590764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-590764 event: Registered Node test-preload-590764 in Controller
	
	
	==> dmesg <==
	[Apr17 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054061] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041377] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.538270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.724435] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.815733] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060294] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065897] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.178398] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.110980] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.278385] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[ +12.673836] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.064867] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.673209] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +6.869261] kauditd_printk_skb: 105 callbacks suppressed
	[Apr17 19:35] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +5.634494] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [70ec18f52249aefed13d8cf828454626404c09a73f1602fbde30a00f10da03cc] <==
	{"level":"info","ts":"2024-04-17T19:34:55.286Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"5e65f7c667250dae","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-17T19:34:55.290Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:34:55.290Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5e65f7c667250dae","initial-advertise-peer-urls":["https://192.168.39.86:2380"],"listen-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.86:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-17T19:34:55.290Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae switched to configuration voters=(6802115243719069102)"}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","added-peer-id":"5e65f7c667250dae","added-peer-peer-urls":["https://192.168.39.86:2380"]}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:34:55.292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgPreVoteResp from 5e65f7c667250dae at term 2"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became candidate at term 3"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgVoteResp from 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became leader at term 3"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5e65f7c667250dae elected leader 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"5e65f7c667250dae","local-member-attributes":"{Name:test-preload-590764 ClientURLs:[https://192.168.39.86:2379]}","request-path":"/0/members/5e65f7c667250dae/attributes","cluster-id":"1e2108b476944475","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:34:57.039Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:34:57.040Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:34:57.041Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.86:2379"}
	{"level":"info","ts":"2024-04-17T19:34:57.042Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T19:34:57.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:34:57.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:35:13 up 0 min,  0 users,  load average: 0.60, 0.17, 0.06
	Linux test-preload-590764 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b8770e3708fd3821ff4f01a1f62322936f917c9922953ce34623ec5a76ffd820] <==
	I0417 19:34:59.473034       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0417 19:34:59.473134       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0417 19:34:59.397684       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0417 19:34:59.473235       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0417 19:34:59.473763       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0417 19:34:59.473800       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0417 19:34:59.546366       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0417 19:34:59.554390       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0417 19:34:59.572732       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0417 19:34:59.573348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:34:59.581799       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0417 19:34:59.595514       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0417 19:34:59.598392       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0417 19:34:59.602369       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:34:59.622250       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:35:00.074168       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0417 19:35:00.402364       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0417 19:35:01.314789       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0417 19:35:01.337722       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0417 19:35:01.380868       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0417 19:35:01.405490       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:35:01.412289       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 19:35:01.435601       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0417 19:35:12.597488       1 controller.go:611] quota admission added evaluator for: endpoints
	I0417 19:35:12.639832       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fa8cd2c66c6f6402265484ad902dfe293b87d846321a5c9265f93c32a92c96c8] <==
	I0417 19:35:12.350400       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0417 19:35:12.350572       1 shared_informer.go:262] Caches are synced for cronjob
	I0417 19:35:12.353326       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0417 19:35:12.354156       1 shared_informer.go:262] Caches are synced for node
	I0417 19:35:12.354226       1 range_allocator.go:173] Starting range CIDR allocator
	I0417 19:35:12.354234       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0417 19:35:12.354242       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0417 19:35:12.354297       1 shared_informer.go:262] Caches are synced for job
	I0417 19:35:12.358678       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0417 19:35:12.362294       1 shared_informer.go:262] Caches are synced for PVC protection
	I0417 19:35:12.379830       1 shared_informer.go:262] Caches are synced for GC
	I0417 19:35:12.381864       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0417 19:35:12.387191       1 shared_informer.go:262] Caches are synced for persistent volume
	I0417 19:35:12.438035       1 shared_informer.go:262] Caches are synced for taint
	I0417 19:35:12.438235       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0417 19:35:12.438316       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-590764. Assuming now as a timestamp.
	I0417 19:35:12.438377       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0417 19:35:12.438695       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0417 19:35:12.438901       1 event.go:294] "Event occurred" object="test-preload-590764" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-590764 event: Registered Node test-preload-590764 in Controller"
	I0417 19:35:12.447358       1 shared_informer.go:262] Caches are synced for daemon sets
	I0417 19:35:12.512052       1 shared_informer.go:262] Caches are synced for resource quota
	I0417 19:35:12.542852       1 shared_informer.go:262] Caches are synced for resource quota
	I0417 19:35:12.943214       1 shared_informer.go:262] Caches are synced for garbage collector
	I0417 19:35:12.943253       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0417 19:35:12.983040       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [ec95550eb454db509c98d268b7c40131dabf9c3ca5e996349d9d47d086da3479] <==
	I0417 19:35:01.327529       1 node.go:163] Successfully retrieved node IP: 192.168.39.86
	I0417 19:35:01.327804       1 server_others.go:138] "Detected node IP" address="192.168.39.86"
	I0417 19:35:01.328324       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0417 19:35:01.399769       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0417 19:35:01.399863       1 server_others.go:206] "Using iptables Proxier"
	I0417 19:35:01.401411       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0417 19:35:01.412815       1 server.go:661] "Version info" version="v1.24.4"
	I0417 19:35:01.412853       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:35:01.416939       1 config.go:317] "Starting service config controller"
	I0417 19:35:01.420003       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0417 19:35:01.420603       1 config.go:226] "Starting endpoint slice config controller"
	I0417 19:35:01.420634       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0417 19:35:01.421624       1 config.go:444] "Starting node config controller"
	I0417 19:35:01.421662       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0417 19:35:01.521331       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0417 19:35:01.521454       1 shared_informer.go:262] Caches are synced for service config
	I0417 19:35:01.526207       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ebe79c5a145d3932eadc07987f8e042541ccc55044523e79276b3e1cdf99647b] <==
	I0417 19:34:55.697181       1 serving.go:348] Generated self-signed cert in-memory
	I0417 19:34:59.592213       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0417 19:34:59.592325       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:34:59.613016       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0417 19:34:59.613170       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0417 19:34:59.613764       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0417 19:34:59.613192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0417 19:34:59.613997       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:34:59.613206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0417 19:34:59.616414       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0417 19:34:59.613215       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:34:59.714642       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0417 19:34:59.714691       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:34:59.721999       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 17 19:34:59 test-preload-590764 kubelet[1078]: I0417 19:34:59.600953    1078 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-590764"
	Apr 17 19:34:59 test-preload-590764 kubelet[1078]: I0417 19:34:59.604852    1078 setters.go:532] "Node became not ready" node="test-preload-590764" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-17 19:34:59.604727092 +0000 UTC m=+5.702383273 LastTransitionTime:2024-04-17 19:34:59.604727092 +0000 UTC m=+5.702383273 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.028379    1078 apiserver.go:52] "Watching apiserver"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.033294    1078 topology_manager.go:200] "Topology Admit Handler"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.033459    1078 topology_manager.go:200] "Topology Admit Handler"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.033546    1078 topology_manager.go:200] "Topology Admit Handler"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: E0417 19:35:00.036157    1078 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xzbhq" podUID=225a1ac7-2a92-4cac-8996-def6e30ecca0
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103128    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfw2\" (UniqueName: \"kubernetes.io/projected/225a1ac7-2a92-4cac-8996-def6e30ecca0-kube-api-access-prfw2\") pod \"coredns-6d4b75cb6d-xzbhq\" (UID: \"225a1ac7-2a92-4cac-8996-def6e30ecca0\") " pod="kube-system/coredns-6d4b75cb6d-xzbhq"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103189    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfd20a08-6164-4a59-a296-11aeb5baf7fd-kube-proxy\") pod \"kube-proxy-dgfb8\" (UID: \"dfd20a08-6164-4a59-a296-11aeb5baf7fd\") " pod="kube-system/kube-proxy-dgfb8"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103210    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfd20a08-6164-4a59-a296-11aeb5baf7fd-lib-modules\") pod \"kube-proxy-dgfb8\" (UID: \"dfd20a08-6164-4a59-a296-11aeb5baf7fd\") " pod="kube-system/kube-proxy-dgfb8"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103265    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b0d3375f-7c93-463b-8462-c18082867a89-tmp\") pod \"storage-provisioner\" (UID: \"b0d3375f-7c93-463b-8462-c18082867a89\") " pod="kube-system/storage-provisioner"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103302    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume\") pod \"coredns-6d4b75cb6d-xzbhq\" (UID: \"225a1ac7-2a92-4cac-8996-def6e30ecca0\") " pod="kube-system/coredns-6d4b75cb6d-xzbhq"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103327    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfd20a08-6164-4a59-a296-11aeb5baf7fd-xtables-lock\") pod \"kube-proxy-dgfb8\" (UID: \"dfd20a08-6164-4a59-a296-11aeb5baf7fd\") " pod="kube-system/kube-proxy-dgfb8"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103346    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhx4\" (UniqueName: \"kubernetes.io/projected/dfd20a08-6164-4a59-a296-11aeb5baf7fd-kube-api-access-klhx4\") pod \"kube-proxy-dgfb8\" (UID: \"dfd20a08-6164-4a59-a296-11aeb5baf7fd\") " pod="kube-system/kube-proxy-dgfb8"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103363    1078 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8lpz\" (UniqueName: \"kubernetes.io/projected/b0d3375f-7c93-463b-8462-c18082867a89-kube-api-access-l8lpz\") pod \"storage-provisioner\" (UID: \"b0d3375f-7c93-463b-8462-c18082867a89\") " pod="kube-system/storage-provisioner"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: I0417 19:35:00.103374    1078 reconciler.go:159] "Reconciler: start to sync state"
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: E0417 19:35:00.206424    1078 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: E0417 19:35:00.206569    1078 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume podName:225a1ac7-2a92-4cac-8996-def6e30ecca0 nodeName:}" failed. No retries permitted until 2024-04-17 19:35:00.706531616 +0000 UTC m=+6.804187799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume") pod "coredns-6d4b75cb6d-xzbhq" (UID: "225a1ac7-2a92-4cac-8996-def6e30ecca0") : object "kube-system"/"coredns" not registered
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: E0417 19:35:00.710385    1078 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 17 19:35:00 test-preload-590764 kubelet[1078]: E0417 19:35:00.710461    1078 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume podName:225a1ac7-2a92-4cac-8996-def6e30ecca0 nodeName:}" failed. No retries permitted until 2024-04-17 19:35:01.71044518 +0000 UTC m=+7.808101348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume") pod "coredns-6d4b75cb6d-xzbhq" (UID: "225a1ac7-2a92-4cac-8996-def6e30ecca0") : object "kube-system"/"coredns" not registered
	Apr 17 19:35:01 test-preload-590764 kubelet[1078]: E0417 19:35:01.721420    1078 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 17 19:35:01 test-preload-590764 kubelet[1078]: E0417 19:35:01.721549    1078 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume podName:225a1ac7-2a92-4cac-8996-def6e30ecca0 nodeName:}" failed. No retries permitted until 2024-04-17 19:35:03.721527262 +0000 UTC m=+9.819183432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume") pod "coredns-6d4b75cb6d-xzbhq" (UID: "225a1ac7-2a92-4cac-8996-def6e30ecca0") : object "kube-system"/"coredns" not registered
	Apr 17 19:35:02 test-preload-590764 kubelet[1078]: E0417 19:35:02.144386    1078 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xzbhq" podUID=225a1ac7-2a92-4cac-8996-def6e30ecca0
	Apr 17 19:35:03 test-preload-590764 kubelet[1078]: E0417 19:35:03.739828    1078 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 17 19:35:03 test-preload-590764 kubelet[1078]: E0417 19:35:03.739999    1078 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume podName:225a1ac7-2a92-4cac-8996-def6e30ecca0 nodeName:}" failed. No retries permitted until 2024-04-17 19:35:07.739965115 +0000 UTC m=+13.837621303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/225a1ac7-2a92-4cac-8996-def6e30ecca0-config-volume") pod "coredns-6d4b75cb6d-xzbhq" (UID: "225a1ac7-2a92-4cac-8996-def6e30ecca0") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [0f40685a3647f69998fc6c0ab56eca912a0adfad72a64e4f08a0478a5a7ce43f] <==
	I0417 19:35:00.812002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-590764 -n test-preload-590764
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-590764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-590764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-590764
--- FAIL: TestPreload (168.36s)
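
The post-mortem commands the harness runs (the helpers_test.go steps above) can also be replayed by hand when triaging a similar failure locally. A minimal sketch, assuming the same out/ binary layout as this run and that the test-preload-590764 profile has not yet been removed by the cleanup step:

	# Report only the API server state of the profile's control-plane node
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p test-preload-590764 -n test-preload-590764

	# List pods (all namespaces) that are not in the Running phase
	kubectl --context test-preload-590764 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running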

                                                
                                    
x
+
TestKubernetesUpgrade (440.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m58.939130382s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-365550] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-365550" primary control-plane node in "kubernetes-upgrade-365550" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:40:04.328636  121874 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:40:04.328911  121874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:40:04.328923  121874 out.go:304] Setting ErrFile to fd 2...
	I0417 19:40:04.328927  121874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:40:04.329140  121874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:40:04.329673  121874 out.go:298] Setting JSON to false
	I0417 19:40:04.330567  121874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12152,"bootTime":1713370652,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:40:04.330627  121874 start.go:139] virtualization: kvm guest
	I0417 19:40:04.332887  121874 out.go:177] * [kubernetes-upgrade-365550] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:40:04.334166  121874 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:40:04.335365  121874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:40:04.334237  121874 notify.go:220] Checking for updates...
	I0417 19:40:04.336714  121874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:40:04.338006  121874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:40:04.339302  121874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:40:04.340528  121874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:40:04.342119  121874 config.go:182] Loaded profile config "NoKubernetes-716489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0417 19:40:04.342201  121874 config.go:182] Loaded profile config "cert-expiration-362714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:40:04.342290  121874 config.go:182] Loaded profile config "running-upgrade-419258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0417 19:40:04.342372  121874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:40:04.377992  121874 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 19:40:04.379194  121874 start.go:297] selected driver: kvm2
	I0417 19:40:04.379210  121874 start.go:901] validating driver "kvm2" against <nil>
	I0417 19:40:04.379225  121874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:40:04.380238  121874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:40:04.380326  121874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:40:04.396105  121874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:40:04.396171  121874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:40:04.396452  121874 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 19:40:04.396531  121874 cni.go:84] Creating CNI manager for ""
	I0417 19:40:04.396547  121874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:40:04.396559  121874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 19:40:04.396639  121874 start.go:340] cluster config:
	{Name:kubernetes-upgrade-365550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-365550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:40:04.396787  121874 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:40:04.398611  121874 out.go:177] * Starting "kubernetes-upgrade-365550" primary control-plane node in "kubernetes-upgrade-365550" cluster
	I0417 19:40:04.399902  121874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 19:40:04.399931  121874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0417 19:40:04.399939  121874 cache.go:56] Caching tarball of preloaded images
	I0417 19:40:04.400016  121874 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:40:04.400027  121874 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0417 19:40:04.400103  121874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/config.json ...
	I0417 19:40:04.400120  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/config.json: {Name:mkd632fd22592b37b457ebc3532614f1f27f168d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
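
Note: the two lines above show the freshly generated cluster config being written to the profile's config.json under a short-lived write lock. The Go sketch below illustrates that persist-as-JSON step with a trimmed-down, hypothetical ClusterConfig struct; the type names and the saveProfile helper are assumptions for illustration, not minikube's actual code.

// Hypothetical, trimmed-down sketch of persisting a cluster profile as JSON.
// The field names mirror a few entries from the config dump above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	DiskSize         int
	KubernetesConfig KubernetesConfig
}

func saveProfile(dir string, cc ClusterConfig) error {
	// Marshal with indentation so the on-disk config.json stays readable.
	data, err := json.MarshalIndent(cc, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cc := ClusterConfig{
		Name: "kubernetes-upgrade-365550", Driver: "kvm2",
		Memory: 2200, CPUs: 2, DiskSize: 20000,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.20.0",
			ClusterName:       "kubernetes-upgrade-365550",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	if err := saveProfile("profiles/kubernetes-upgrade-365550", cc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
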
	I0417 19:40:04.400249  121874 start.go:360] acquireMachinesLock for kubernetes-upgrade-365550: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:40:32.861797  121874 start.go:364] duration metric: took 28.461519145s to acquireMachinesLock for "kubernetes-upgrade-365550"
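
Note: acquireMachinesLock above is a lock with a 500ms retry delay and a 13m timeout (see the lock spec logged at 19:40:04.400249), and this run waited about 28.5s for it. A minimal sketch of that acquire-with-timeout pattern follows, assuming a simple lock-file scheme rather than minikube's real mutex package.

// Minimal sketch of a "retry until timeout" lock acquisition in the spirit of
// the {Delay:500ms Timeout:13m0s} lock specs logged above. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until it succeeds or the timeout expires.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer release()
	// Mirrors the "duration metric: took ..." line in the log above.
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}
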
	I0417 19:40:32.861854  121874 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-365550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-365550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:40:32.861977  121874 start.go:125] createHost starting for "" (driver="kvm2")
	I0417 19:40:32.864455  121874 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0417 19:40:32.864665  121874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:40:32.864719  121874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:40:32.881424  121874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0417 19:40:32.881878  121874 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:40:32.882440  121874 main.go:141] libmachine: Using API Version  1
	I0417 19:40:32.882489  121874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:40:32.882911  121874 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:40:32.883126  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetMachineName
	I0417 19:40:32.883267  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:32.883421  121874 start.go:159] libmachine.API.Create for "kubernetes-upgrade-365550" (driver="kvm2")
	I0417 19:40:32.883452  121874 client.go:168] LocalClient.Create starting
	I0417 19:40:32.883492  121874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem
	I0417 19:40:32.883534  121874 main.go:141] libmachine: Decoding PEM data...
	I0417 19:40:32.883558  121874 main.go:141] libmachine: Parsing certificate...
	I0417 19:40:32.883638  121874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem
	I0417 19:40:32.883665  121874 main.go:141] libmachine: Decoding PEM data...
	I0417 19:40:32.883684  121874 main.go:141] libmachine: Parsing certificate...
	I0417 19:40:32.883710  121874 main.go:141] libmachine: Running pre-create checks...
	I0417 19:40:32.883729  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .PreCreateCheck
	I0417 19:40:32.884060  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetConfigRaw
	I0417 19:40:32.884489  121874 main.go:141] libmachine: Creating machine...
	I0417 19:40:32.884505  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Create
	I0417 19:40:32.884669  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Creating KVM machine...
	I0417 19:40:32.885975  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found existing default KVM network
	I0417 19:40:32.887537  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:32.887337  122362 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012de30}
	I0417 19:40:32.887567  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | created network xml: 
	I0417 19:40:32.887584  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | <network>
	I0417 19:40:32.887594  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   <name>mk-kubernetes-upgrade-365550</name>
	I0417 19:40:32.887603  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   <dns enable='no'/>
	I0417 19:40:32.887611  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   
	I0417 19:40:32.887622  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0417 19:40:32.887634  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |     <dhcp>
	I0417 19:40:32.887684  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0417 19:40:32.887704  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |     </dhcp>
	I0417 19:40:32.887713  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   </ip>
	I0417 19:40:32.887723  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG |   
	I0417 19:40:32.887735  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | </network>
	I0417 19:40:32.887746  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | 
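
Note: the DBG lines above show the kvm2 driver rendering a private libvirt network definition (mk-<profile>, gateway 192.168.39.1, DHCP range .2 to .253) before creating it. Below is a small Go sketch of rendering equivalent XML with text/template; the template text and the netParams type are assumptions modeled on the logged XML, not the driver's actual template.

// Sketch of rendering a private libvirt network definition like the one logged
// above using Go's text/template. Illustrative only.
package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	p := netParams{
		Name:      "kubernetes-upgrade-365550",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	// Write the rendered XML to stdout; a real driver would hand it to libvirt
	// (for example via virsh net-define / net-start, or the libvirt Go bindings).
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
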
	I0417 19:40:32.893234  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | trying to create private KVM network mk-kubernetes-upgrade-365550 192.168.39.0/24...
	I0417 19:40:32.964766  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | private KVM network mk-kubernetes-upgrade-365550 192.168.39.0/24 created
	I0417 19:40:32.965020  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting up store path in /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550 ...
	I0417 19:40:32.965068  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Building disk image from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 19:40:32.965081  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:32.965001  122362 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:40:32.965213  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Downloading /home/jenkins/minikube-integration/18665-75973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0417 19:40:33.197166  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:33.197030  122362 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa...
	I0417 19:40:33.501037  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:33.500880  122362 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/kubernetes-upgrade-365550.rawdisk...
	I0417 19:40:33.501068  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Writing magic tar header
	I0417 19:40:33.501082  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Writing SSH key tar header
	I0417 19:40:33.501091  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:33.501021  122362 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550 ...
	I0417 19:40:33.501187  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550 (perms=drwx------)
	I0417 19:40:33.501232  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550
	I0417 19:40:33.501250  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube/machines (perms=drwxr-xr-x)
	I0417 19:40:33.501276  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973/.minikube (perms=drwxr-xr-x)
	I0417 19:40:33.501292  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube/machines
	I0417 19:40:33.501307  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:40:33.501322  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins/minikube-integration/18665-75973 (perms=drwxrwxr-x)
	I0417 19:40:33.501339  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0417 19:40:33.501353  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0417 19:40:33.501366  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Creating domain...
	I0417 19:40:33.501386  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18665-75973
	I0417 19:40:33.501401  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0417 19:40:33.501413  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home/jenkins
	I0417 19:40:33.501428  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Checking permissions on dir: /home
	I0417 19:40:33.501440  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Skipping /home - not owner
	I0417 19:40:33.502502  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) define libvirt domain using xml: 
	I0417 19:40:33.502516  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) <domain type='kvm'>
	I0417 19:40:33.502528  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <name>kubernetes-upgrade-365550</name>
	I0417 19:40:33.502539  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <memory unit='MiB'>2200</memory>
	I0417 19:40:33.502548  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <vcpu>2</vcpu>
	I0417 19:40:33.502557  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <features>
	I0417 19:40:33.502569  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <acpi/>
	I0417 19:40:33.502574  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <apic/>
	I0417 19:40:33.502585  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <pae/>
	I0417 19:40:33.502598  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     
	I0417 19:40:33.502608  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   </features>
	I0417 19:40:33.502625  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <cpu mode='host-passthrough'>
	I0417 19:40:33.502636  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   
	I0417 19:40:33.502651  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   </cpu>
	I0417 19:40:33.502667  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <os>
	I0417 19:40:33.502678  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <type>hvm</type>
	I0417 19:40:33.502690  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <boot dev='cdrom'/>
	I0417 19:40:33.502701  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <boot dev='hd'/>
	I0417 19:40:33.502713  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <bootmenu enable='no'/>
	I0417 19:40:33.502727  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   </os>
	I0417 19:40:33.502770  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   <devices>
	I0417 19:40:33.502795  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <disk type='file' device='cdrom'>
	I0417 19:40:33.502813  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/boot2docker.iso'/>
	I0417 19:40:33.502826  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <target dev='hdc' bus='scsi'/>
	I0417 19:40:33.502839  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <readonly/>
	I0417 19:40:33.502850  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </disk>
	I0417 19:40:33.502863  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <disk type='file' device='disk'>
	I0417 19:40:33.502876  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0417 19:40:33.502907  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <source file='/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/kubernetes-upgrade-365550.rawdisk'/>
	I0417 19:40:33.502926  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <target dev='hda' bus='virtio'/>
	I0417 19:40:33.502940  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </disk>
	I0417 19:40:33.502953  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <interface type='network'>
	I0417 19:40:33.502965  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <source network='mk-kubernetes-upgrade-365550'/>
	I0417 19:40:33.502977  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <model type='virtio'/>
	I0417 19:40:33.502989  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </interface>
	I0417 19:40:33.503061  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <interface type='network'>
	I0417 19:40:33.503095  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <source network='default'/>
	I0417 19:40:33.503122  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <model type='virtio'/>
	I0417 19:40:33.503134  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </interface>
	I0417 19:40:33.503147  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <serial type='pty'>
	I0417 19:40:33.503159  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <target port='0'/>
	I0417 19:40:33.503171  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </serial>
	I0417 19:40:33.503184  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <console type='pty'>
	I0417 19:40:33.503204  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <target type='serial' port='0'/>
	I0417 19:40:33.503214  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </console>
	I0417 19:40:33.503221  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     <rng model='virtio'>
	I0417 19:40:33.503235  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)       <backend model='random'>/dev/random</backend>
	I0417 19:40:33.503246  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     </rng>
	I0417 19:40:33.503258  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     
	I0417 19:40:33.503271  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)     
	I0417 19:40:33.503283  121874 main.go:141] libmachine: (kubernetes-upgrade-365550)   </devices>
	I0417 19:40:33.503290  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) </domain>
	I0417 19:40:33.503301  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) 
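
Note: once the domain XML above is assembled, the driver defines the domain in libvirt and boots it ("Creating domain..." / "Ensuring networks are active..."). Below is a rough sketch of that define-and-start step using the libvirt Go bindings (libvirt.org/go/libvirt); the helper, the local domain.xml file, and the exact binding calls used here are assumptions for illustration, not the kvm2 driver's code.

// Rough sketch of defining and starting a libvirt domain from XML. Building
// this requires the libvirt Go bindings and libvirt development headers (cgo).
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connecting to libvirt: %w", err)
	}
	defer conn.Close()

	// Persistently define the domain from the XML assembled above...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("defining domain: %w", err)
	}
	defer dom.Free()

	// ...then boot it, which corresponds to the "Creating domain..." log line.
	if err := dom.Create(); err != nil {
		return fmt.Errorf("starting domain: %w", err)
	}
	return nil
}

func main() {
	xml, err := os.ReadFile("domain.xml") // e.g. the XML printed in the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := defineAndStart(string(xml)); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
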
	I0417 19:40:33.507800  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:a1:43:4e in network default
	I0417 19:40:33.508367  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Ensuring networks are active...
	I0417 19:40:33.508386  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:33.509094  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Ensuring network default is active
	I0417 19:40:33.509408  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Ensuring network mk-kubernetes-upgrade-365550 is active
	I0417 19:40:33.509990  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Getting domain xml...
	I0417 19:40:33.510758  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Creating domain...
	I0417 19:40:34.706046  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Waiting to get IP...
	I0417 19:40:34.707053  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:34.707529  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:34.707583  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:34.707525  122362 retry.go:31] will retry after 201.6124ms: waiting for machine to come up
	I0417 19:40:34.911063  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:34.911534  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:34.911564  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:34.911486  122362 retry.go:31] will retry after 388.817365ms: waiting for machine to come up
	I0417 19:40:35.302285  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:35.302797  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:35.302821  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:35.302749  122362 retry.go:31] will retry after 306.521101ms: waiting for machine to come up
	I0417 19:40:35.611142  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:35.611679  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:35.611721  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:35.611644  122362 retry.go:31] will retry after 596.995987ms: waiting for machine to come up
	I0417 19:40:36.210700  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:36.211205  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:36.211593  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:36.211148  122362 retry.go:31] will retry after 491.077212ms: waiting for machine to come up
	I0417 19:40:36.703814  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:36.704343  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:36.704378  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:36.704266  122362 retry.go:31] will retry after 759.577198ms: waiting for machine to come up
	I0417 19:40:37.465857  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:37.466377  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:37.466423  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:37.466346  122362 retry.go:31] will retry after 876.014201ms: waiting for machine to come up
	I0417 19:40:38.344850  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:38.345519  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:38.345544  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:38.345467  122362 retry.go:31] will retry after 1.094356218s: waiting for machine to come up
	I0417 19:40:39.442073  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:39.442565  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:39.442595  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:39.442515  122362 retry.go:31] will retry after 1.664157261s: waiting for machine to come up
	I0417 19:40:41.108707  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:41.109284  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:41.109307  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:41.109249  122362 retry.go:31] will retry after 2.06547274s: waiting for machine to come up
	I0417 19:40:43.176053  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:43.176584  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:43.176615  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:43.176542  122362 retry.go:31] will retry after 2.338862597s: waiting for machine to come up
	I0417 19:40:45.518074  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:45.518585  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:45.518617  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:45.518534  122362 retry.go:31] will retry after 3.333290405s: waiting for machine to come up
	I0417 19:40:48.853414  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:48.853908  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:48.853941  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:48.853854  122362 retry.go:31] will retry after 2.761280033s: waiting for machine to come up
	I0417 19:40:51.618073  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:51.618529  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:40:51.618555  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:40:51.618492  122362 retry.go:31] will retry after 3.446766339s: waiting for machine to come up
	I0417 19:40:55.067957  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.068514  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Found IP for machine: 192.168.39.51
	I0417 19:40:55.068564  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has current primary IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.068575  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Reserving static IP address...
	I0417 19:40:55.068982  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-365550", mac: "52:54:00:6b:0b:f3", ip: "192.168.39.51"} in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.146461  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Getting to WaitForSSH function...
	I0417 19:40:55.146490  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Reserved static IP address: 192.168.39.51
	I0417 19:40:55.146504  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Waiting for SSH to be available...
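
Note: the repeated "will retry after ...: waiting for machine to come up" lines above are a backoff loop polling for the VM's DHCP lease, with growing, slightly randomized delays until the IP 192.168.39.51 appears. A minimal sketch of such a jittered-backoff retry follows; retryWithBackoff and its parameters are hypothetical, not minikube's retry helper.

// Minimal backoff-with-jitter retry loop in the spirit of the
// "will retry after ..." lines above. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxAttempts is reached,
// sleeping a jittered, roughly doubling delay between attempts.
func retryWithBackoff(fn func() error, base time.Duration, maxAttempts int) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// Jitter the delay by up to 50% so concurrent waiters don't sync up.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return nil
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil // pretend the DHCP lease showed up on the fifth check
	}, 200*time.Millisecond, 10)
	if err != nil {
		fmt.Println(err)
	}
}
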
	I0417 19:40:55.149647  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.150127  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.150166  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.150352  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Using SSH client type: external
	I0417 19:40:55.150382  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Using SSH private key: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa (-rw-------)
	I0417 19:40:55.150463  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0417 19:40:55.150494  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | About to run SSH command:
	I0417 19:40:55.150532  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | exit 0
	I0417 19:40:55.276805  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | SSH cmd err, output: <nil>: 
	I0417 19:40:55.277093  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) KVM machine creation complete!
	I0417 19:40:55.277434  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetConfigRaw
	I0417 19:40:55.278012  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:55.278247  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:55.278415  121874 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0417 19:40:55.278431  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetState
	I0417 19:40:55.279842  121874 main.go:141] libmachine: Detecting operating system of created instance...
	I0417 19:40:55.279858  121874 main.go:141] libmachine: Waiting for SSH to be available...
	I0417 19:40:55.279864  121874 main.go:141] libmachine: Getting to WaitForSSH function...
	I0417 19:40:55.279870  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.282177  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.282530  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.282576  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.282659  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:55.282868  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.283078  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.283232  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:55.283443  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:55.283687  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:55.283714  121874 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0417 19:40:55.388367  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
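
Note: the block above probes the new VM by running "exit 0" over SSH with the generated machine key. A sketch of that reachability check using golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders copied from the log, and probeSSH is not the libmachine implementation.

// Sketch of the "run `exit 0` over SSH to check the machine is reachable" probe.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	// A zero exit status means the guest is up and accepting commands.
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.51:22", "docker",
		"/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa")
	fmt.Println("SSH cmd err, output:", err)
}
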
	I0417 19:40:55.388393  121874 main.go:141] libmachine: Detecting the provisioner...
	I0417 19:40:55.388404  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.391470  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.391938  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.391973  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.392153  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:55.392366  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.392557  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.392720  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:55.392907  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:55.393188  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:55.393216  121874 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0417 19:40:55.502518  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0417 19:40:55.502608  121874 main.go:141] libmachine: found compatible host: buildroot
	I0417 19:40:55.502625  121874 main.go:141] libmachine: Provisioning with buildroot...
	I0417 19:40:55.502638  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetMachineName
	I0417 19:40:55.502897  121874 buildroot.go:166] provisioning hostname "kubernetes-upgrade-365550"
	I0417 19:40:55.502934  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetMachineName
	I0417 19:40:55.503118  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.505851  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.506248  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.506298  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.506483  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:55.506687  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.506857  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.507031  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:55.507229  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:55.507436  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:55.507456  121874 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-365550 && echo "kubernetes-upgrade-365550" | sudo tee /etc/hostname
	I0417 19:40:55.640844  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-365550
	
	I0417 19:40:55.640879  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.643808  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.644123  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.644154  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.644361  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:55.644551  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.644721  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.644890  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:55.645054  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:55.645265  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:55.645289  121874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-365550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-365550/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-365550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:40:55.754868  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 19:40:55.754905  121874 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 19:40:55.754937  121874 buildroot.go:174] setting up certificates
	I0417 19:40:55.754952  121874 provision.go:84] configureAuth start
	I0417 19:40:55.754966  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetMachineName
	I0417 19:40:55.755348  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetIP
	I0417 19:40:55.757982  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.758388  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.758438  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.758527  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.760640  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.760971  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.761000  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.761120  121874 provision.go:143] copyHostCerts
	I0417 19:40:55.761187  121874 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 19:40:55.761218  121874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:40:55.761272  121874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 19:40:55.761369  121874 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 19:40:55.761379  121874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:40:55.761399  121874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 19:40:55.761445  121874 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 19:40:55.761452  121874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:40:55.761468  121874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 19:40:55.761510  121874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-365550 san=[127.0.0.1 192.168.39.51 kubernetes-upgrade-365550 localhost minikube]
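
Note: provision.go above generates a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.39.51 kubernetes-upgrade-365550 localhost minikube]. Below is a self-contained crypto/x509 sketch of minting such a certificate; the throwaway CA and the omitted error handling are simplifications for brevity, whereas the real flow loads ca.pem / ca-key.pem from the certs directory shown above.

// Self-contained sketch of issuing a server cert with IP and DNS SANs.
// Error handling is omitted for brevity; do not reuse as-is.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption: real code loads an existing CA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with SANs mirroring the "san=[...]" log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-365550"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.51")},
		DNSNames:     []string{"kubernetes-upgrade-365550", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit server.pem; the matching server-key.pem would be written the same way.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
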
	I0417 19:40:55.909714  121874 provision.go:177] copyRemoteCerts
	I0417 19:40:55.909780  121874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:40:55.909814  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:55.912365  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.912686  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:55.912721  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:55.912883  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:55.913113  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:55.913270  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:55.913418  121874 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:40:55.996305  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0417 19:40:56.022795  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:40:56.048373  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0417 19:40:56.075166  121874 provision.go:87] duration metric: took 320.199661ms to configureAuth
	I0417 19:40:56.075193  121874 buildroot.go:189] setting minikube options for container-runtime
	I0417 19:40:56.075413  121874 config.go:182] Loaded profile config "kubernetes-upgrade-365550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0417 19:40:56.075513  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:56.077964  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.078296  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.078344  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.078475  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:56.078652  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.078797  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.078963  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:56.079123  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:56.079293  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:56.079313  121874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:40:56.358857  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:40:56.358890  121874 main.go:141] libmachine: Checking connection to Docker...
	I0417 19:40:56.358901  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetURL
	I0417 19:40:56.360123  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Using libvirt version 6000000
	I0417 19:40:56.362695  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.363031  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.363064  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.363254  121874 main.go:141] libmachine: Docker is up and running!
	I0417 19:40:56.363272  121874 main.go:141] libmachine: Reticulating splines...
	I0417 19:40:56.363281  121874 client.go:171] duration metric: took 23.479817844s to LocalClient.Create
	I0417 19:40:56.363310  121874 start.go:167] duration metric: took 23.479890057s to libmachine.API.Create "kubernetes-upgrade-365550"
	I0417 19:40:56.363323  121874 start.go:293] postStartSetup for "kubernetes-upgrade-365550" (driver="kvm2")
	I0417 19:40:56.363338  121874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:40:56.363359  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:56.363611  121874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:40:56.363639  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:56.365885  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.366178  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.366219  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.366386  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:56.366543  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.366694  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:56.366917  121874 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:40:56.452338  121874 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:40:56.457457  121874 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:40:56.457486  121874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:40:56.457546  121874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:40:56.457617  121874 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:40:56.457740  121874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:40:56.470518  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:40:56.498566  121874 start.go:296] duration metric: took 135.22487ms for postStartSetup
	I0417 19:40:56.498647  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetConfigRaw
	I0417 19:40:56.499286  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetIP
	I0417 19:40:56.502412  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.502809  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.502839  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.503044  121874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/config.json ...
	I0417 19:40:56.503258  121874 start.go:128] duration metric: took 23.641266198s to createHost
	I0417 19:40:56.503289  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:56.506126  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.506556  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.506595  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.506863  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:56.507090  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.507291  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.507456  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:56.507676  121874 main.go:141] libmachine: Using SSH client type: native
	I0417 19:40:56.507870  121874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0417 19:40:56.507895  121874 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0417 19:40:56.613963  121874 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713382856.551961723
	
	I0417 19:40:56.613986  121874 fix.go:216] guest clock: 1713382856.551961723
	I0417 19:40:56.613994  121874 fix.go:229] Guest: 2024-04-17 19:40:56.551961723 +0000 UTC Remote: 2024-04-17 19:40:56.503273093 +0000 UTC m=+52.221737286 (delta=48.68863ms)
	I0417 19:40:56.614044  121874 fix.go:200] guest clock delta is within tolerance: 48.68863ms
	I0417 19:40:56.614052  121874 start.go:83] releasing machines lock for "kubernetes-upgrade-365550", held for 23.752221473s
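The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and accept the host if the delta stays within tolerance (here 48.68863ms). A minimal sketch of that check, assuming a 2-second tolerance; the threshold and helper names are assumptions for illustration, not minikube's actual values.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (e.g. "1713382856.551961723")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// pad/truncate the fractional part to 9 digits of nanoseconds
    		frac := (parts[1] + "000000000")[:9]
    		nsec, err = strconv.ParseInt(frac, 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1713382856.551961723") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Now().Sub(guest)
    	const tolerance = 2 * time.Second // assumed threshold for illustration
    	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }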
	I0417 19:40:56.614082  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:56.614363  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetIP
	I0417 19:40:56.617742  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.618208  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.618238  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.618723  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:56.619392  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:56.619593  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:40:56.619720  121874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:40:56.619772  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:56.619896  121874 ssh_runner.go:195] Run: cat /version.json
	I0417 19:40:56.619929  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:40:56.623040  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.623281  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.623440  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.623478  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.623645  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:56.623770  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:56.623802  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.623836  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:56.624006  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:56.624021  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:40:56.624179  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:40:56.624186  121874 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:40:56.624359  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:40:56.624551  121874 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:40:56.702287  121874 ssh_runner.go:195] Run: systemctl --version
	I0417 19:40:56.734067  121874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:40:56.911197  121874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 19:40:56.918097  121874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:40:56.918167  121874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:40:56.935688  121874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
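The find/mv step above parks any pre-existing bridge or podman CNI configs aside by renaming them with a .mk_disabled suffix, so only the bridge CNI that minikube writes later is active. A rough local equivalent of that rename, assuming the same glob patterns as the log's find command; illustration only.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Same patterns the log's find command matches in /etc/cni/net.d.
    	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
    	for _, p := range patterns {
    		matches, err := filepath.Glob(p)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already parked aside
    			}
    			disabled := m + ".mk_disabled"
    			if err := os.Rename(m, disabled); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    				continue
    			}
    			fmt.Printf("disabled %s -> %s\n", m, disabled)
    		}
    	}
    }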
	I0417 19:40:56.935722  121874 start.go:494] detecting cgroup driver to use...
	I0417 19:40:56.935795  121874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:40:56.954004  121874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:40:56.968873  121874 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:40:56.968947  121874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:40:56.983409  121874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:40:56.998653  121874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:40:57.123482  121874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:40:57.308966  121874 docker.go:233] disabling docker service ...
	I0417 19:40:57.309037  121874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:40:57.324564  121874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:40:57.338869  121874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:40:57.482840  121874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:40:57.648654  121874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:40:57.665130  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:40:57.688030  121874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0417 19:40:57.688091  121874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:40:57.702429  121874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:40:57.702496  121874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:40:57.715101  121874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:40:57.727397  121874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:40:57.741171  121874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
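The sed invocations above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A local sketch of the same line replacement, assuming the file already contains those keys; the regular expression mirrors the sed pattern, while the function name and permissions are illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // replaceKey rewrites every line matching `key = ...` with the given value,
    // mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
    func replaceKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	updated := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
    	return os.WriteFile(path, updated, 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := replaceKey(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := replaceKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }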
	I0417 19:40:57.754227  121874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:40:57.765844  121874 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0417 19:40:57.765916  121874 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0417 19:40:57.782021  121874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:40:57.793216  121874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:40:57.929291  121874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:40:58.099011  121874 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:40:58.099112  121874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:40:58.104856  121874 start.go:562] Will wait 60s for crictl version
	I0417 19:40:58.104918  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:40:58.109238  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:40:58.150177  121874 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:40:58.150270  121874 ssh_runner.go:195] Run: crio --version
	I0417 19:40:58.180796  121874 ssh_runner.go:195] Run: crio --version
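After restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing `crictl version`. A bare-bones sketch of that stat-based poll; the 500ms interval and error handling are assumptions for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is present; crictl version can be probed next")
    }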
	I0417 19:40:58.226304  121874 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0417 19:40:58.227729  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetIP
	I0417 19:40:58.231279  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:58.231692  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:40:48 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:40:58.231735  121874 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:40:58.231883  121874 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0417 19:40:58.237607  121874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:40:58.253323  121874 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-365550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-365550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:40:58.253471  121874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 19:40:58.253538  121874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:40:58.299294  121874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0417 19:40:58.299397  121874 ssh_runner.go:195] Run: which lz4
	I0417 19:40:58.304297  121874 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0417 19:40:58.310662  121874 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0417 19:40:58.310705  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0417 19:41:00.367282  121874 crio.go:462] duration metric: took 2.063026596s to copy over tarball
	I0417 19:41:00.367369  121874 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0417 19:41:03.325434  121874 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.958038444s)
	I0417 19:41:03.325473  121874 crio.go:469] duration metric: took 2.958154402s to extract the tarball
	I0417 19:41:03.325483  121874 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 19:41:03.386727  121874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:41:03.464287  121874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0417 19:41:03.464315  121874 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0417 19:41:03.464382  121874 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:41:03.464399  121874 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0417 19:41:03.464426  121874 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0417 19:41:03.464462  121874 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0417 19:41:03.464488  121874 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0417 19:41:03.464490  121874 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0417 19:41:03.464467  121874 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0417 19:41:03.464463  121874 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0417 19:41:03.465778  121874 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0417 19:41:03.466106  121874 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0417 19:41:03.466184  121874 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:41:03.466239  121874 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0417 19:41:03.466107  121874 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0417 19:41:03.466185  121874 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0417 19:41:03.466105  121874 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0417 19:41:03.466229  121874 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0417 19:41:03.613449  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0417 19:41:03.615681  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0417 19:41:03.616937  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0417 19:41:03.622886  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0417 19:41:03.628599  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0417 19:41:03.629600  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0417 19:41:03.644404  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0417 19:41:03.759005  121874 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0417 19:41:03.759058  121874 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0417 19:41:03.759105  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.802086  121874 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0417 19:41:03.802142  121874 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0417 19:41:03.802198  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.802215  121874 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0417 19:41:03.802235  121874 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0417 19:41:03.802253  121874 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0417 19:41:03.802261  121874 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0417 19:41:03.802295  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.802295  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.810019  121874 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0417 19:41:03.810063  121874 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0417 19:41:03.810108  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.814161  121874 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0417 19:41:03.814240  121874 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0417 19:41:03.814325  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.819581  121874 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0417 19:41:03.819637  121874 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0417 19:41:03.819685  121874 ssh_runner.go:195] Run: which crictl
	I0417 19:41:03.819751  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0417 19:41:03.819809  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0417 19:41:03.819876  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0417 19:41:03.824634  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0417 19:41:03.824689  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0417 19:41:03.830147  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0417 19:41:03.835548  121874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0417 19:41:03.930454  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0417 19:41:03.942111  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0417 19:41:03.942146  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0417 19:41:03.983030  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0417 19:41:03.983079  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0417 19:41:03.984798  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0417 19:41:03.993735  121874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0417 19:41:04.088616  121874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0417 19:41:04.239893  121874 cache_images.go:92] duration metric: took 775.559531ms to LoadCachedImages
	W0417 19:41:04.240028  121874 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18665-75973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
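The cache_images block above inspects each required image with `podman image inspect --format {{.Id}}`; any image whose ID does not match the pinned hash is removed with `crictl rmi` and queued for loading from the local cache under .minikube/cache/images (which is missing here, hence the warning). A stripped-down sketch of that ID comparison, assuming podman is on PATH and run without sudo; only the two hashes shown are taken from the log, the rest is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Expected image IDs (values here are the hashes quoted in the log above).
    	expected := map[string]string{
    		"registry.k8s.io/pause:3.2":     "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
    		"registry.k8s.io/coredns:1.7.0": "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
    	}
    	for image, want := range expected {
    		out, err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    		got := strings.TrimSpace(string(out))
    		if err != nil || got != want {
    			// Mismatch or missing: the real flow removes the image via crictl rmi
    			// and loads the cached tarball from .minikube/cache/images instead.
    			fmt.Printf("%s needs transfer (have %q)\n", image, got)
    			continue
    		}
    		fmt.Printf("%s already present\n", image)
    	}
    }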
	I0417 19:41:04.240052  121874 kubeadm.go:928] updating node { 192.168.39.51 8443 v1.20.0 crio true true} ...
	I0417 19:41:04.240216  121874 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-365550 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-365550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:41:04.240324  121874 ssh_runner.go:195] Run: crio config
	I0417 19:41:04.300067  121874 cni.go:84] Creating CNI manager for ""
	I0417 19:41:04.300090  121874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:41:04.300103  121874 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:41:04.300122  121874 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-365550 NodeName:kubernetes-upgrade-365550 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0417 19:41:04.300250  121874 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-365550"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:41:04.300315  121874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0417 19:41:04.311094  121874 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:41:04.311172  121874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:41:04.321631  121874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0417 19:41:04.339412  121874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0417 19:41:04.601478  121874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0417 19:41:04.630954  121874 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0417 19:41:04.637517  121874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
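The bash one-liner above strips any stale `control-plane.minikube.internal` entry from /etc/hosts and appends the fresh mapping via a temp file before copying it back. The same edit expressed in Go, writing to a scratch path instead of /etc/hosts so it can be run safely; the host name and IP come from the log, the output path is an assumption.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entryHost = "control-plane.minikube.internal"
    	const entry = "192.168.39.51\t" + entryHost

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Mirror `grep -v $'\tcontrol-plane.minikube.internal$'`: drop stale entries.
    		if strings.HasSuffix(line, "\t"+entryHost) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	// Write to a scratch file; the real flow copies the temp file over /etc/hosts with sudo.
    	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote /tmp/hosts.updated with:", entry)
    }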
	I0417 19:41:04.650785  121874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:41:04.800703  121874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:41:04.821680  121874 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550 for IP: 192.168.39.51
	I0417 19:41:04.821706  121874 certs.go:194] generating shared ca certs ...
	I0417 19:41:04.821728  121874 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:04.821903  121874 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:41:04.821965  121874 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:41:04.821980  121874 certs.go:256] generating profile certs ...
	I0417 19:41:04.822060  121874 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.key
	I0417 19:41:04.822082  121874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.crt with IP's: []
	I0417 19:41:05.170521  121874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.crt ...
	I0417 19:41:05.170551  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.crt: {Name:mk624365ceb04b6c4a5f2afe863ea70580e48b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:05.251344  121874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.key ...
	I0417 19:41:05.251386  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/client.key: {Name:mk1f32f72e80f2ceea99338aaed4bd888a341ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:05.251531  121874 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key.ef246879
	I0417 19:41:05.251555  121874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt.ef246879 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.51]
	I0417 19:41:05.517170  121874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt.ef246879 ...
	I0417 19:41:05.517202  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt.ef246879: {Name:mk5ea05b5e4468eb49bf9e1de6f6a86f5d252801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:05.517366  121874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key.ef246879 ...
	I0417 19:41:05.517383  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key.ef246879: {Name:mke0679392e3643076db40d415589ea7e871ecc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:05.517457  121874 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt.ef246879 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt
	I0417 19:41:05.517530  121874 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key.ef246879 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key
	I0417 19:41:05.517589  121874 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.key
	I0417 19:41:05.517605  121874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.crt with IP's: []
	I0417 19:41:05.616667  121874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.crt ...
	I0417 19:41:05.616698  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.crt: {Name:mk42b6b530d3e0ebfefa03b40cd2caf96029e9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:41:05.616874  121874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.key ...
	I0417 19:41:05.616894  121874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.key: {Name:mkd3a65e312627ee99e50998f258acdebf451e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
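The certs.go lines above generate per-profile client, apiserver and proxy-client key pairs, with the apiserver certificate carrying the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.51). A compact, self-signed stand-in for that step using crypto/x509; in the real flow the certificate is signed by minikubeCA rather than by itself, so treat this purely as a sketch of how IP SANs end up in the cert.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs matching the apiserver cert in the log above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.51"),
    		},
    	}
    	// Self-signed here; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	fmt.Fprintln(os.Stderr, "generated apiserver-style cert with IP SANs")
    }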
	I0417 19:41:05.617090  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:41:05.617130  121874 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:41:05.617140  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:41:05.617167  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:41:05.617189  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:41:05.617212  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:41:05.617247  121874 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:41:05.617862  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:41:05.649419  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:41:05.677751  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:41:05.704828  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:41:05.752490  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0417 19:41:05.781493  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:41:05.809904  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:41:05.838840  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kubernetes-upgrade-365550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0417 19:41:05.866459  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:41:05.892860  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:41:05.918285  121874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:41:05.943807  121874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:41:05.962947  121874 ssh_runner.go:195] Run: openssl version
	I0417 19:41:05.971405  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:41:05.986770  121874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:41:05.991786  121874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:41:05.991845  121874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:41:06.000058  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:41:06.015335  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:41:06.028383  121874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:41:06.033427  121874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:41:06.033485  121874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:41:06.039950  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:41:06.054105  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:41:06.066657  121874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:41:06.071454  121874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:41:06.071524  121874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:41:06.078169  121874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
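The openssl/ln sequence above follows the standard OpenSSL subject-hash layout: each CA certificate gets a symlink in /etc/ssl/certs named `<subject-hash>.0` (e.g. b5213941.0 for minikubeCA.pem) so TLS libraries can locate it by hash. A small sketch reproducing the link for one certificate, assuming the openssl binary is available; permissions aside, the paths are the ones from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash -noout` prints the subject hash used to name the symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// The real flow runs this with sudo; without permissions this will simply fail.
    	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("%s -> %s\n", link, cert)
    }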
	I0417 19:41:06.090757  121874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:41:06.095005  121874 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 19:41:06.095074  121874 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-365550 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-365550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:41:06.095223  121874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:41:06.095299  121874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:41:06.139383  121874 cri.go:89] found id: ""
	I0417 19:41:06.139489  121874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 19:41:06.153653  121874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:41:06.165513  121874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:41:06.176440  121874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:41:06.176460  121874 kubeadm.go:156] found existing configuration files:
	
	I0417 19:41:06.176513  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:41:06.187759  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:41:06.187835  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:41:06.199350  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:41:06.213274  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:41:06.213331  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:41:06.225486  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:41:06.237008  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:41:06.237081  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:41:06.249373  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:41:06.260793  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:41:06.260868  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:41:06.272501  121874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 19:41:06.420078  121874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0417 19:41:06.420176  121874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:41:06.632972  121874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:41:06.633157  121874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:41:06.633306  121874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:41:06.828065  121874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:41:06.830289  121874 out.go:204]   - Generating certificates and keys ...
	I0417 19:41:06.830408  121874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:41:06.830519  121874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:41:07.071309  121874 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 19:41:07.241674  121874 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 19:41:07.385673  121874 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 19:41:07.518198  121874 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 19:41:07.814430  121874 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 19:41:07.814780  121874 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0417 19:41:08.253460  121874 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 19:41:08.258040  121874 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I0417 19:41:08.458980  121874 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 19:41:08.695469  121874 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 19:41:08.977298  121874 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 19:41:08.977453  121874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:41:09.214412  121874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:41:09.545792  121874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:41:09.734298  121874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:41:09.905757  121874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:41:09.928700  121874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:41:09.930089  121874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:41:09.930168  121874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:41:10.096631  121874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:41:10.098348  121874 out.go:204]   - Booting up control plane ...
	I0417 19:41:10.098500  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:41:10.108416  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:41:10.110393  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:41:10.112229  121874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:41:10.119171  121874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0417 19:41:50.059680  121874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0417 19:41:50.060536  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:41:50.060755  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:41:55.060399  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:41:55.060698  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:42:05.060635  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:42:05.060953  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:42:25.061399  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:42:25.061705  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:43:05.063676  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:43:05.064399  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:43:05.064421  121874 kubeadm.go:309] 
	I0417 19:43:05.064456  121874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0417 19:43:05.064493  121874 kubeadm.go:309] 		timed out waiting for the condition
	I0417 19:43:05.064501  121874 kubeadm.go:309] 
	I0417 19:43:05.064530  121874 kubeadm.go:309] 	This error is likely caused by:
	I0417 19:43:05.064560  121874 kubeadm.go:309] 		- The kubelet is not running
	I0417 19:43:05.064724  121874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0417 19:43:05.064749  121874 kubeadm.go:309] 
	I0417 19:43:05.064896  121874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0417 19:43:05.064973  121874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0417 19:43:05.065048  121874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0417 19:43:05.065070  121874 kubeadm.go:309] 
	I0417 19:43:05.065212  121874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0417 19:43:05.065340  121874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0417 19:43:05.065351  121874 kubeadm.go:309] 
	I0417 19:43:05.065477  121874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0417 19:43:05.065597  121874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0417 19:43:05.065713  121874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0417 19:43:05.065813  121874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0417 19:43:05.065824  121874 kubeadm.go:309] 
	I0417 19:43:05.066360  121874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0417 19:43:05.066439  121874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0417 19:43:05.066494  121874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0417 19:43:05.066668  121874 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-365550 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0417 19:43:05.066728  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0417 19:43:06.031759  121874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:43:06.047269  121874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:43:06.058015  121874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:43:06.058037  121874 kubeadm.go:156] found existing configuration files:
	
	I0417 19:43:06.058095  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:43:06.068137  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:43:06.068195  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:43:06.078455  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:43:06.088106  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:43:06.088167  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:43:06.098756  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:43:06.108412  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:43:06.108476  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:43:06.118633  121874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:43:06.128564  121874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:43:06.128633  121874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:43:06.138749  121874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 19:43:06.204530  121874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0417 19:43:06.204625  121874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:43:06.357135  121874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:43:06.357318  121874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:43:06.357447  121874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:43:06.561681  121874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:43:06.563863  121874 out.go:204]   - Generating certificates and keys ...
	I0417 19:43:06.563962  121874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:43:06.564048  121874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:43:06.564143  121874 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0417 19:43:06.564228  121874 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0417 19:43:06.564360  121874 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0417 19:43:06.564450  121874 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0417 19:43:06.565068  121874 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0417 19:43:06.565980  121874 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0417 19:43:06.566819  121874 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0417 19:43:06.567665  121874 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0417 19:43:06.567975  121874 kubeadm.go:309] [certs] Using the existing "sa" key
	I0417 19:43:06.568058  121874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:43:06.688426  121874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:43:07.045507  121874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:43:07.113443  121874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:43:07.270120  121874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:43:07.285257  121874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:43:07.286297  121874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:43:07.286348  121874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:43:07.431747  121874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:43:07.433910  121874 out.go:204]   - Booting up control plane ...
	I0417 19:43:07.434040  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:43:07.443792  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:43:07.445245  121874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:43:07.446073  121874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:43:07.448406  121874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0417 19:43:47.449884  121874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0417 19:43:47.450381  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:43:47.450678  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:43:52.451389  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:43:52.451667  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:44:02.452394  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:44:02.452594  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:44:22.453677  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:44:22.453889  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:45:02.457799  121874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0417 19:45:02.458051  121874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0417 19:45:02.458058  121874 kubeadm.go:309] 
	I0417 19:45:02.458119  121874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0417 19:45:02.458168  121874 kubeadm.go:309] 		timed out waiting for the condition
	I0417 19:45:02.458175  121874 kubeadm.go:309] 
	I0417 19:45:02.458218  121874 kubeadm.go:309] 	This error is likely caused by:
	I0417 19:45:02.458258  121874 kubeadm.go:309] 		- The kubelet is not running
	I0417 19:45:02.458395  121874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0417 19:45:02.458408  121874 kubeadm.go:309] 
	I0417 19:45:02.458586  121874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0417 19:45:02.458647  121874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0417 19:45:02.458703  121874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0417 19:45:02.458720  121874 kubeadm.go:309] 
	I0417 19:45:02.458851  121874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0417 19:45:02.458972  121874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0417 19:45:02.458989  121874 kubeadm.go:309] 
	I0417 19:45:02.459137  121874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0417 19:45:02.459264  121874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0417 19:45:02.459370  121874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0417 19:45:02.459460  121874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0417 19:45:02.459467  121874 kubeadm.go:309] 
	I0417 19:45:02.461081  121874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0417 19:45:02.461182  121874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0417 19:45:02.461248  121874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0417 19:45:02.461321  121874 kubeadm.go:393] duration metric: took 3m56.36625536s to StartCluster
	I0417 19:45:02.461373  121874 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0417 19:45:02.461433  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0417 19:45:02.524198  121874 cri.go:89] found id: ""
	I0417 19:45:02.524233  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.524245  121874 logs.go:278] No container was found matching "kube-apiserver"
	I0417 19:45:02.524254  121874 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0417 19:45:02.524325  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0417 19:45:02.565154  121874 cri.go:89] found id: ""
	I0417 19:45:02.565188  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.565200  121874 logs.go:278] No container was found matching "etcd"
	I0417 19:45:02.565207  121874 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0417 19:45:02.565272  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0417 19:45:02.611586  121874 cri.go:89] found id: ""
	I0417 19:45:02.611618  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.611629  121874 logs.go:278] No container was found matching "coredns"
	I0417 19:45:02.611637  121874 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0417 19:45:02.611739  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0417 19:45:02.653980  121874 cri.go:89] found id: ""
	I0417 19:45:02.654004  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.654012  121874 logs.go:278] No container was found matching "kube-scheduler"
	I0417 19:45:02.654019  121874 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0417 19:45:02.654061  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0417 19:45:02.701529  121874 cri.go:89] found id: ""
	I0417 19:45:02.701558  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.701570  121874 logs.go:278] No container was found matching "kube-proxy"
	I0417 19:45:02.701583  121874 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0417 19:45:02.701648  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0417 19:45:02.750817  121874 cri.go:89] found id: ""
	I0417 19:45:02.750853  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.750865  121874 logs.go:278] No container was found matching "kube-controller-manager"
	I0417 19:45:02.750876  121874 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0417 19:45:02.750944  121874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0417 19:45:02.797049  121874 cri.go:89] found id: ""
	I0417 19:45:02.797078  121874 logs.go:276] 0 containers: []
	W0417 19:45:02.797088  121874 logs.go:278] No container was found matching "kindnet"
	I0417 19:45:02.797101  121874 logs.go:123] Gathering logs for kubelet ...
	I0417 19:45:02.797114  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0417 19:45:02.856041  121874 logs.go:123] Gathering logs for dmesg ...
	I0417 19:45:02.856116  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0417 19:45:02.872576  121874 logs.go:123] Gathering logs for describe nodes ...
	I0417 19:45:02.872618  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0417 19:45:03.019723  121874 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0417 19:45:03.019747  121874 logs.go:123] Gathering logs for CRI-O ...
	I0417 19:45:03.019761  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0417 19:45:03.144334  121874 logs.go:123] Gathering logs for container status ...
	I0417 19:45:03.144378  121874 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0417 19:45:03.200726  121874 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0417 19:45:03.200807  121874 out.go:239] * 
	* 
	W0417 19:45:03.200890  121874 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0417 19:45:03.200928  121874 out.go:239] * 
	* 
	W0417 19:45:03.202001  121874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0417 19:45:03.206197  121874 out.go:177] 
	W0417 19:45:03.207791  121874 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0417 19:45:03.207865  121874 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0417 19:45:03.207895  121874 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0417 19:45:03.209784  121874 out.go:177] 

                                                
                                                
** /stderr **
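The kubeadm hints and the minikube suggestion above can be tried by hand against the failing node. A minimal sketch, assuming the kubernetes-upgrade-365550 profile used by the test below; apart from the flags and paths already printed in this log, the exact command choices are illustrative only:

    # Inspect kubelet state and recent journal entries on the node
    out/minikube-linux-amd64 -p kubernetes-upgrade-365550 ssh -- sudo systemctl status kubelet --no-pager
    out/minikube-linux-amd64 -p kubernetes-upgrade-365550 ssh -- sudo journalctl -xeu kubelet --no-pager

    # Reproduce the health probe that kubeadm was polling
    out/minikube-linux-amd64 -p kubernetes-upgrade-365550 ssh -- curl -sSL http://localhost:10248/healthz

    # List control-plane containers via CRI-O, as suggested by kubeadm
    out/minikube-linux-amd64 -p kubernetes-upgrade-365550 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Retry the start with the cgroup-driver override suggested by minikube
    out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
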
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-365550
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-365550: (1.637156112s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-365550 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-365550 status --format={{.Host}}: exit status 7 (98.109121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.913642226s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-365550 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (96.988463ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-365550] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-365550
	    minikube start -p kubernetes-upgrade-365550 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3655502 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-365550 --kubernetes-version=v1.30.0-rc.2
	    

                                                
                                                
** /stderr **
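The downgrade is rejected as expected, and the test then follows suggestion 3 by restarting the existing cluster at v1.30.0-rc.2. A minimal sketch of doing the same by hand and confirming the resulting server version, mirroring the kubectl check the test runs above (all commands appear elsewhere in this log):

    # Keep the existing cluster at the newer version (suggestion 3)
    out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --kubernetes-version=v1.30.0-rc.2 --driver=kvm2 --container-runtime=crio

    # Confirm the client and server versions
    kubectl --context kubernetes-upgrade-365550 version --output=json
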
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-365550 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.693913124s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-17 19:47:18.786178095 +0000 UTC m=+6503.508166285
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-365550 -n kubernetes-upgrade-365550
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-365550 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-365550 logs -n 25: (3.991035458s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558                             | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo systemctl cat docker                            |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | docker system info                                   |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558                             | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo cat                    | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo cat                    | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558                             | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo cat                    | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558                             | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |                |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-450558 sudo                        | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | crio config                                          |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-450558                         | enable-default-cni-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo cat /etc/nsswitch.conf                          |                           |         |                |                     |                     |
	| delete  | -p custom-flannel-450558                             | custom-flannel-450558     | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	| ssh     | -p enable-default-cni-450558                         | enable-default-cni-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo cat /etc/hosts                                  |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-450558                         | enable-default-cni-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo cat /etc/resolv.conf                            |                           |         |                |                     |                     |
	| start   | -p bridge-450558 --memory=3072                       | bridge-450558             | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-450558                         | enable-default-cni-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo crictl pods                                     |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-450558                         | enable-default-cni-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:47 UTC | 17 Apr 24 19:47 UTC |
	|         | sudo crictl ps --all                                 |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:47:17
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:47:17.507034  132575 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:47:17.507395  132575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:47:17.507426  132575 out.go:304] Setting ErrFile to fd 2...
	I0417 19:47:17.507437  132575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:47:17.507645  132575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:47:17.508252  132575 out.go:298] Setting JSON to false
	I0417 19:47:17.509568  132575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12586,"bootTime":1713370652,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:47:17.509645  132575 start.go:139] virtualization: kvm guest
	I0417 19:47:17.512148  132575 out.go:177] * [bridge-450558] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:47:17.514102  132575 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:47:17.515543  132575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:47:17.514172  132575 notify.go:220] Checking for updates...
	I0417 19:47:17.518089  132575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:47:17.519401  132575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:47:17.520889  132575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:47:17.522413  132575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:47:17.524469  132575 config.go:182] Loaded profile config "enable-default-cni-450558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:47:17.524624  132575 config.go:182] Loaded profile config "flannel-450558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:47:17.524755  132575 config.go:182] Loaded profile config "kubernetes-upgrade-365550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:47:17.524938  132575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:47:17.574171  132575 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 19:47:17.575410  132575 start.go:297] selected driver: kvm2
	I0417 19:47:17.575432  132575 start.go:901] validating driver "kvm2" against <nil>
	I0417 19:47:17.575461  132575 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:47:17.576457  132575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:47:17.576573  132575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:47:17.598995  132575 install.go:137] /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:47:17.599138  132575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:47:17.599527  132575 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:47:17.599687  132575 cni.go:84] Creating CNI manager for "bridge"
	I0417 19:47:17.599711  132575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 19:47:17.599804  132575 start.go:340] cluster config:
	{Name:bridge-450558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:bridge-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:47:17.599989  132575 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:47:17.602502  132575 out.go:177] * Starting "bridge-450558" primary control-plane node in "bridge-450558" cluster
	I0417 19:47:17.603896  132575 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:47:17.603980  132575 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:47:17.604005  132575 cache.go:56] Caching tarball of preloaded images
	I0417 19:47:17.604189  132575 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:47:17.604234  132575 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:47:17.604403  132575 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/config.json ...
	I0417 19:47:17.604447  132575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/config.json: {Name:mkd7dcda4044305dd7cf017d752deb4a9bc21c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:47:17.604706  132575 start.go:360] acquireMachinesLock for bridge-450558: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:47:17.604822  132575 start.go:364] duration metric: took 73.655µs to acquireMachinesLock for "bridge-450558"
	I0417 19:47:17.604904  132575 start.go:93] Provisioning new machine with config: &{Name:bridge-450558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.2 ClusterName:bridge-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:47:17.605020  132575 start.go:125] createHost starting for "" (driver="kvm2")
	I0417 19:47:17.301944  128702 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:47:17.301968  128702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0417 19:47:17.301993  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:47:17.302449  128702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I0417 19:47:17.302964  128702 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:47:17.303587  128702 main.go:141] libmachine: Using API Version  1
	I0417 19:47:17.303612  128702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:47:17.304171  128702 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:47:17.304900  128702 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2
	I0417 19:47:17.304949  128702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:47:17.306406  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:47:17.306912  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:45:19 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:47:17.306941  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:47:17.307266  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:47:17.307440  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:47:17.307571  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:47:17.307688  128702 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:47:17.324617  128702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0417 19:47:17.326325  128702 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:47:17.326904  128702 main.go:141] libmachine: Using API Version  1
	I0417 19:47:17.326935  128702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:47:17.327428  128702 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:47:17.327712  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetState
	I0417 19:47:17.329941  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .DriverName
	I0417 19:47:17.330283  128702 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0417 19:47:17.330304  128702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0417 19:47:17.330327  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHHostname
	I0417 19:47:17.335670  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:47:17.336297  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0b:f3", ip: ""} in network mk-kubernetes-upgrade-365550: {Iface:virbr3 ExpiryTime:2024-04-17 20:45:19 +0000 UTC Type:0 Mac:52:54:00:6b:0b:f3 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:kubernetes-upgrade-365550 Clientid:01:52:54:00:6b:0b:f3}
	I0417 19:47:17.336324  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined IP address 192.168.39.51 and MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:47:17.336603  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHPort
	I0417 19:47:17.336816  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHKeyPath
	I0417 19:47:17.336995  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .GetSSHUsername
	I0417 19:47:17.337161  128702 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/kubernetes-upgrade-365550/id_rsa Username:docker}
	I0417 19:47:17.557498  128702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:47:17.586015  128702 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:47:17.586086  128702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:47:17.608747  128702 api_server.go:72] duration metric: took 365.234288ms to wait for apiserver process to appear ...
	I0417 19:47:17.608788  128702 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:47:17.608811  128702 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0417 19:47:17.620927  128702 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0417 19:47:17.622472  128702 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 19:47:17.622500  128702 api_server.go:131] duration metric: took 13.702514ms to wait for apiserver health ...
	I0417 19:47:17.622510  128702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:47:17.631112  128702 system_pods.go:59] 8 kube-system pods found
	I0417 19:47:17.631136  128702 system_pods.go:61] "coredns-7db6d8ff4d-6srns" [0e5adde5-2b9a-4e7a-b464-cc6e44a1969b] Running
	I0417 19:47:17.631140  128702 system_pods.go:61] "coredns-7db6d8ff4d-mcwpc" [9b9e3327-f67d-4963-83e1-e9696f17bee4] Running
	I0417 19:47:17.631146  128702 system_pods.go:61] "etcd-kubernetes-upgrade-365550" [61f79693-f450-42ea-a58f-12d78a396440] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0417 19:47:17.631153  128702 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-365550" [dabd6d13-beb7-461f-8ef4-76430959fcf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0417 19:47:17.631163  128702 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-365550" [01b30836-7483-456b-bb44-aaf3cc5710b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0417 19:47:17.631168  128702 system_pods.go:61] "kube-proxy-4g6lf" [13afa0bd-e6a7-4898-a241-58ec78818248] Running
	I0417 19:47:17.631173  128702 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-365550" [eed22bff-740f-4b4e-bfdf-d9b780026fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0417 19:47:17.631176  128702 system_pods.go:61] "storage-provisioner" [22ff1cb9-178d-4bc5-a779-4c5be38fc031] Running
	I0417 19:47:17.631182  128702 system_pods.go:74] duration metric: took 8.665153ms to wait for pod list to return data ...
	I0417 19:47:17.631192  128702 kubeadm.go:576] duration metric: took 387.685846ms to wait for: map[apiserver:true system_pods:true]
	I0417 19:47:17.631204  128702 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:47:17.636086  128702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 19:47:17.636100  128702 node_conditions.go:123] node cpu capacity is 2
	I0417 19:47:17.636109  128702 node_conditions.go:105] duration metric: took 4.900312ms to run NodePressure ...
	I0417 19:47:17.636121  128702 start.go:240] waiting for startup goroutines ...
	I0417 19:47:17.682859  128702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0417 19:47:17.789754  128702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0417 19:47:17.933540  128702 main.go:141] libmachine: Making call to close driver server
	I0417 19:47:17.933563  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Close
	I0417 19:47:17.933845  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Closing plugin on server side
	I0417 19:47:17.933888  128702 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:47:17.933896  128702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:47:17.933908  128702 main.go:141] libmachine: Making call to close driver server
	I0417 19:47:17.933917  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Close
	I0417 19:47:17.934167  128702 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:47:17.934181  128702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:47:17.943987  128702 main.go:141] libmachine: Making call to close driver server
	I0417 19:47:17.944013  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Close
	I0417 19:47:17.944360  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Closing plugin on server side
	I0417 19:47:17.944384  128702 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:47:17.944394  128702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:47:18.680984  128702 main.go:141] libmachine: Making call to close driver server
	I0417 19:47:18.681012  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Close
	I0417 19:47:18.681512  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Closing plugin on server side
	I0417 19:47:18.681630  128702 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:47:18.681652  128702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:47:18.681664  128702 main.go:141] libmachine: Making call to close driver server
	I0417 19:47:18.681673  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) Calling .Close
	I0417 19:47:18.681969  128702 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | Closing plugin on server side
	I0417 19:47:18.681997  128702 main.go:141] libmachine: Successfully made call to close driver server
	I0417 19:47:18.682004  128702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0417 19:47:18.684746  128702 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0417 19:47:18.687299  128702 addons.go:505] duration metric: took 1.443678349s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0417 19:47:18.687342  128702 start.go:245] waiting for cluster config update ...
	I0417 19:47:18.687357  128702 start.go:254] writing updated cluster config ...
	I0417 19:47:18.687638  128702 ssh_runner.go:195] Run: rm -f paused
	I0417 19:47:18.759086  128702 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 19:47:18.761902  128702 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-365550" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.808313482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4d3ccfb-b5a8-49a4-8306-5e9113587b85 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.811894741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e635a4ef-45d7-4e9f-b345-91b832fbe7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.812521896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383239812484659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e635a4ef-45d7-4e9f-b345-91b832fbe7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.813360540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fc1ebd1-ced3-483c-892d-90c8e6660054 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.813488662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fc1ebd1-ced3-483c-892d-90c8e6660054 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.814045374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:353f67208f0876f24dcd62e16fb0cc2d99870975f22002025eb5afadbf011edb,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236265100748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2addcbc66471967b45c5c2eef13b22fa9889514eb187d05e2664658421f07579,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236277183712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66840eaa6998d5b02de707fecfbfe772bf7a933855824094db979b7fb7f889a8,PodSandboxId:0161ff09888e874b01682019804ef8c0109df45361d7590261504f8e8d8e9250,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713383236295156593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092e00f08965fa6bd7a47b0705d0e1d1caa6b5558c354535082b85dd9e2dd8c0,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNIN
G,CreatedAt:1713383232363891454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9a1d885fddb851186bf36a081baa935589529f8907dadffe98980d30ddad8,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RU
NNING,CreatedAt:1713383232349402340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c91d2d2de4d20c4886ecddc22c3e2455fa0be23f69f1fc1ddbe021857b412b84,PodSandboxId:03d7eb68bdd96ea79bb6d11b1231e630c8ba48996ca20705cef3e366e1777e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_RUNNING,CreatedAt:1713383231705970743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073bc7cfd96265edca700505bec0dc8ec9a8bd65f047f5cf8c824da89de002bc,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383
209820953443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8cb9aaa48860742bfbac0e35ee55940070dd5fad4c745c19115ddf81bf3d4a,PodSandboxId:dcc94c466cf0a96b2c23ad942917e5108e584f8a259d4d1b4ea89fbae854f6ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383206830868
049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:640d8f6944751745344fd4ae943b0d9fc6ce571efecea6f84709b7a5ca0bd92d,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383203024952931,Labels:map[string
]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11e4a2ba2a506dd4c30918fa2ab342275025be4ac3e93e5901395a57624a905,PodSandboxId:97ba41a2f71594ce14d0203ec0f8de39931a67b6b5b53f130d24bb04be6e2865,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383202304999073,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5790d7a480b415c6adb5a37444c1be8cb055525954919a4ac785c4060fef9bf2,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194756567959,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99ca559b6a6e0ddade0bb2479b67ef58d0b33bd913ca0aee41e915ccf07546ae,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194708997889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcff4f7350292e5102e5fca538da1ff3ea15754668b1a3bf4818e47be46159c,PodSandboxId:75314126158b2f28f290cbeaacd8d6ed
532774a60a1ef3c8045267e329e00029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713383190911003620,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f266c8204ebe49b71fa663b900d97fc3125d201f06115f7bb13886182ee3ff4,PodSandboxId:4fa46516f6b8a817729590cde8226d2a15e677ef37093e
96d5102881be19f184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383190951584633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606de76f183675d47338fb9e7f6143f76e40250b3d4436885a4a584396efbc62,PodSandboxId:3e829452ce675f9f92312ed90126ec24960aaa4bd43d41a6719e11ed64e4fa73,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383190896947543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a8fab9bdfcddbfbe2ee4bc7d086f13b05d26170e592f1e48470b5dd15c97bc,PodSandboxId:6abba6929fc95bdfddad3568cf1b435f27c39b7b3318613876461b6b458578c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383190567408560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fc1ebd1-ced3-483c-892d-90c8e6660054 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.904879850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d8b5933-fbd2-4ca0-819a-5ecde4f4c954 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.905024021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d8b5933-fbd2-4ca0-819a-5ecde4f4c954 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.906455932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c46838b3-dbb7-4ac1-9bf5-f63cca568c86 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.907039126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383239907000722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c46838b3-dbb7-4ac1-9bf5-f63cca568c86 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.908193236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad52d421-1e93-437e-a433-bf2d5d61a2f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.908371151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad52d421-1e93-437e-a433-bf2d5d61a2f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.909162233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:353f67208f0876f24dcd62e16fb0cc2d99870975f22002025eb5afadbf011edb,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236265100748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2addcbc66471967b45c5c2eef13b22fa9889514eb187d05e2664658421f07579,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236277183712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66840eaa6998d5b02de707fecfbfe772bf7a933855824094db979b7fb7f889a8,PodSandboxId:0161ff09888e874b01682019804ef8c0109df45361d7590261504f8e8d8e9250,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713383236295156593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092e00f08965fa6bd7a47b0705d0e1d1caa6b5558c354535082b85dd9e2dd8c0,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNIN
G,CreatedAt:1713383232363891454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9a1d885fddb851186bf36a081baa935589529f8907dadffe98980d30ddad8,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RU
NNING,CreatedAt:1713383232349402340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c91d2d2de4d20c4886ecddc22c3e2455fa0be23f69f1fc1ddbe021857b412b84,PodSandboxId:03d7eb68bdd96ea79bb6d11b1231e630c8ba48996ca20705cef3e366e1777e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_RUNNING,CreatedAt:1713383231705970743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073bc7cfd96265edca700505bec0dc8ec9a8bd65f047f5cf8c824da89de002bc,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383
209820953443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8cb9aaa48860742bfbac0e35ee55940070dd5fad4c745c19115ddf81bf3d4a,PodSandboxId:dcc94c466cf0a96b2c23ad942917e5108e584f8a259d4d1b4ea89fbae854f6ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383206830868
049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:640d8f6944751745344fd4ae943b0d9fc6ce571efecea6f84709b7a5ca0bd92d,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383203024952931,Labels:map[string
]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11e4a2ba2a506dd4c30918fa2ab342275025be4ac3e93e5901395a57624a905,PodSandboxId:97ba41a2f71594ce14d0203ec0f8de39931a67b6b5b53f130d24bb04be6e2865,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383202304999073,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5790d7a480b415c6adb5a37444c1be8cb055525954919a4ac785c4060fef9bf2,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194756567959,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99ca559b6a6e0ddade0bb2479b67ef58d0b33bd913ca0aee41e915ccf07546ae,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194708997889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcff4f7350292e5102e5fca538da1ff3ea15754668b1a3bf4818e47be46159c,PodSandboxId:75314126158b2f28f290cbeaacd8d6ed
532774a60a1ef3c8045267e329e00029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713383190911003620,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f266c8204ebe49b71fa663b900d97fc3125d201f06115f7bb13886182ee3ff4,PodSandboxId:4fa46516f6b8a817729590cde8226d2a15e677ef37093e
96d5102881be19f184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383190951584633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606de76f183675d47338fb9e7f6143f76e40250b3d4436885a4a584396efbc62,PodSandboxId:3e829452ce675f9f92312ed90126ec24960aaa4bd43d41a6719e11ed64e4fa73,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383190896947543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a8fab9bdfcddbfbe2ee4bc7d086f13b05d26170e592f1e48470b5dd15c97bc,PodSandboxId:6abba6929fc95bdfddad3568cf1b435f27c39b7b3318613876461b6b458578c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383190567408560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad52d421-1e93-437e-a433-bf2d5d61a2f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.978726086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d26be404-f8e5-4389-b278-abd5e1bde982 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.978865468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d26be404-f8e5-4389-b278-abd5e1bde982 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.980577623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=370428d7-51c6-476a-b9f9-e9b5669b48a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.981222511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383239981193182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=370428d7-51c6-476a-b9f9-e9b5669b48a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.981955705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dd12573-5aa0-4c95-9761-e5c55588920f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.982037504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dd12573-5aa0-4c95-9761-e5c55588920f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:19 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:19.983219737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:353f67208f0876f24dcd62e16fb0cc2d99870975f22002025eb5afadbf011edb,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236265100748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2addcbc66471967b45c5c2eef13b22fa9889514eb187d05e2664658421f07579,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236277183712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66840eaa6998d5b02de707fecfbfe772bf7a933855824094db979b7fb7f889a8,PodSandboxId:0161ff09888e874b01682019804ef8c0109df45361d7590261504f8e8d8e9250,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713383236295156593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092e00f08965fa6bd7a47b0705d0e1d1caa6b5558c354535082b85dd9e2dd8c0,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNIN
G,CreatedAt:1713383232363891454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9a1d885fddb851186bf36a081baa935589529f8907dadffe98980d30ddad8,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RU
NNING,CreatedAt:1713383232349402340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c91d2d2de4d20c4886ecddc22c3e2455fa0be23f69f1fc1ddbe021857b412b84,PodSandboxId:03d7eb68bdd96ea79bb6d11b1231e630c8ba48996ca20705cef3e366e1777e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_RUNNING,CreatedAt:1713383231705970743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073bc7cfd96265edca700505bec0dc8ec9a8bd65f047f5cf8c824da89de002bc,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383
209820953443,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8cb9aaa48860742bfbac0e35ee55940070dd5fad4c745c19115ddf81bf3d4a,PodSandboxId:dcc94c466cf0a96b2c23ad942917e5108e584f8a259d4d1b4ea89fbae854f6ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383206830868
049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:640d8f6944751745344fd4ae943b0d9fc6ce571efecea6f84709b7a5ca0bd92d,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383203024952931,Labels:map[string
]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11e4a2ba2a506dd4c30918fa2ab342275025be4ac3e93e5901395a57624a905,PodSandboxId:97ba41a2f71594ce14d0203ec0f8de39931a67b6b5b53f130d24bb04be6e2865,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383202304999073,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5790d7a480b415c6adb5a37444c1be8cb055525954919a4ac785c4060fef9bf2,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194756567959,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99ca559b6a6e0ddade0bb2479b67ef58d0b33bd913ca0aee41e915ccf07546ae,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383194708997889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bcff4f7350292e5102e5fca538da1ff3ea15754668b1a3bf4818e47be46159c,PodSandboxId:75314126158b2f28f290cbeaacd8d6ed
532774a60a1ef3c8045267e329e00029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713383190911003620,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f266c8204ebe49b71fa663b900d97fc3125d201f06115f7bb13886182ee3ff4,PodSandboxId:4fa46516f6b8a817729590cde8226d2a15e677ef37093e
96d5102881be19f184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383190951584633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606de76f183675d47338fb9e7f6143f76e40250b3d4436885a4a584396efbc62,PodSandboxId:3e829452ce675f9f92312ed90126ec24960aaa4bd43d41a6719e11ed64e4fa73,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383190896947543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a8fab9bdfcddbfbe2ee4bc7d086f13b05d26170e592f1e48470b5dd15c97bc,PodSandboxId:6abba6929fc95bdfddad3568cf1b435f27c39b7b3318613876461b6b458578c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383190567408560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dd12573-5aa0-4c95-9761-e5c55588920f name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:20 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:20.070714145Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da24b43c-5bc7-4147-a67b-677f9487f1b8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:47:20 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:20.071068107Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mcwpc,Uid:9b9e3327-f67d-4963-83e1-e9696f17bee4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383194184096098,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:45:55.150908988Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6srns,Uid:0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383194146162857,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:45:55.175415694Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dcc94c466cf0a96b2c23ad942917e5108e584f8a259d4d1b4ea89fbae854f6ee,Metadata:&PodSandboxMetadata{Name:kube-proxy-4g6lf,Uid:13afa0bd-e6a7-4898-a241-58ec78818248,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193724600174,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:45:54.961810554Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0161ff09888e874b01682019804ef8c0109df45361d7590261504f8e8d8e9250,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:22ff1cb9-178d-4bc5-a779-4c5be38fc031,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193672598260,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-17T19:45:54.450685886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-365550,Uid:228b8d5b544e19450628d6fab93396a2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193628789786,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6f
ab93396a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.51:8443,kubernetes.io/config.hash: 228b8d5b544e19450628d6fab93396a2,kubernetes.io/config.seen: 2024-04-17T19:45:34.734611847Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-365550,Uid:56aadf20b7b0412ec9ced5f7c6d4a32c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193628200018,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 56aadf20b7b0412ec9ced5f7c6d4a32c,kubernetes.io/config.seen: 2024-04-17T19:45:
34.734685862Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:03d7eb68bdd96ea79bb6d11b1231e630c8ba48996ca20705cef3e366e1777e26,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-365550,Uid:103087c0876aecf7fccfb38fccad7341,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193553094531,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.51:2379,kubernetes.io/config.hash: 103087c0876aecf7fccfb38fccad7341,kubernetes.io/config.seen: 2024-04-17T19:45:34.772399664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97ba41a2f71594ce14d0203ec0f8de39931a67b6b5b53f130d24bb04be6e2865,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-365550,Uid:556
f0d4e5725cbfb0d0fa73d2c1fe483,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713383193522166622,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 556f0d4e5725cbfb0d0fa73d2c1fe483,kubernetes.io/config.seen: 2024-04-17T19:45:34.734688310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=da24b43c-5bc7-4147-a67b-677f9487f1b8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:47:20 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:20.071973829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=046cdcb0-2248-415e-a3d4-771d81f5cd5d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:20 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:20.072092534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=046cdcb0-2248-415e-a3d4-771d81f5cd5d name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:47:20 kubernetes-upgrade-365550 crio[2985]: time="2024-04-17 19:47:20.072420080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:353f67208f0876f24dcd62e16fb0cc2d99870975f22002025eb5afadbf011edb,PodSandboxId:4a2cecf46b33d9c8a1c3ef2e3aceb9c5d44ac17d75bc7bb5360b80910671f36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236265100748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6srns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5adde5-2b9a-4e7a-b464-cc6e44a1969b,},Annotations:map[string]string{io.kubernetes.container.hash: ae3c90ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2addcbc66471967b45c5c2eef13b22fa9889514eb187d05e2664658421f07579,PodSandboxId:8de46b0fe7b669f4a361e501a6e94c1692341162e4dd2258474c3e34fd283b2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383236277183712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mcwpc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9b9e3327-f67d-4963-83e1-e9696f17bee4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6f9bd2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66840eaa6998d5b02de707fecfbfe772bf7a933855824094db979b7fb7f889a8,PodSandboxId:0161ff09888e874b01682019804ef8c0109df45361d7590261504f8e8d8e9250,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1713383236295156593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ff1cb9-178d-4bc5-a779-4c5be38fc031,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092e00f08965fa6bd7a47b0705d0e1d1caa6b5558c354535082b85dd9e2dd8c0,PodSandboxId:c122581c1236863310f15944bafc9f1cf7cde96897f7e8b1a6fc0e97c0bba533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNIN
G,CreatedAt:1713383232363891454,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228b8d5b544e19450628d6fab93396a2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c6466df,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9a1d885fddb851186bf36a081baa935589529f8907dadffe98980d30ddad8,PodSandboxId:b0bb1cf77cdc13dac596021e670430f4ab492a8d08523a7883e79e7399715403,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RU
NNING,CreatedAt:1713383232349402340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aadf20b7b0412ec9ced5f7c6d4a32c,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c91d2d2de4d20c4886ecddc22c3e2455fa0be23f69f1fc1ddbe021857b412b84,PodSandboxId:03d7eb68bdd96ea79bb6d11b1231e630c8ba48996ca20705cef3e366e1777e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER
_RUNNING,CreatedAt:1713383231705970743,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 103087c0876aecf7fccfb38fccad7341,},Annotations:map[string]string{io.kubernetes.container.hash: 54fb014c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8cb9aaa48860742bfbac0e35ee55940070dd5fad4c745c19115ddf81bf3d4a,PodSandboxId:dcc94c466cf0a96b2c23ad942917e5108e584f8a259d4d1b4ea89fbae854f6ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383206
830868049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4g6lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13afa0bd-e6a7-4898-a241-58ec78818248,},Annotations:map[string]string{io.kubernetes.container.hash: da4dd785,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11e4a2ba2a506dd4c30918fa2ab342275025be4ac3e93e5901395a57624a905,PodSandboxId:97ba41a2f71594ce14d0203ec0f8de39931a67b6b5b53f130d24bb04be6e2865,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383202304999073,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-365550,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556f0d4e5725cbfb0d0fa73d2c1fe483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=046cdcb0-2248-415e-a3d4-771d81f5cd5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66840eaa6998d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   0161ff09888e8       storage-provisioner
	2addcbc664719       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   8de46b0fe7b66       coredns-7db6d8ff4d-mcwpc
	353f67208f087       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   4a2cecf46b33d       coredns-7db6d8ff4d-6srns
	092e00f08965f       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   8 seconds ago       Running             kube-apiserver            3                   c122581c12368       kube-apiserver-kubernetes-upgrade-365550
	cbe9a1d885fdd       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   8 seconds ago       Running             kube-controller-manager   3                   b0bb1cf77cdc1       kube-controller-manager-kubernetes-upgrade-365550
	c91d2d2de4d20       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 seconds ago       Running             etcd                      2                   03d7eb68bdd96       etcd-kubernetes-upgrade-365550
	073bc7cfd9626       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   30 seconds ago      Exited              kube-apiserver            2                   c122581c12368       kube-apiserver-kubernetes-upgrade-365550
	da8cb9aaa4886       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   33 seconds ago      Running             kube-proxy                2                   dcc94c466cf0a       kube-proxy-4g6lf
	640d8f6944751       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   37 seconds ago      Exited              kube-controller-manager   2                   b0bb1cf77cdc1       kube-controller-manager-kubernetes-upgrade-365550
	e11e4a2ba2a50       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   38 seconds ago      Running             kube-scheduler            2                   97ba41a2f7159       kube-scheduler-kubernetes-upgrade-365550
	5790d7a480b41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago      Exited              coredns                   1                   4a2cecf46b33d       coredns-7db6d8ff4d-6srns
	99ca559b6a6e0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago      Exited              coredns                   1                   8de46b0fe7b66       coredns-7db6d8ff4d-mcwpc
	7f266c8204ebe       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   49 seconds ago      Exited              kube-proxy                1                   4fa46516f6b8a       kube-proxy-4g6lf
	7bcff4f735029       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   49 seconds ago      Exited              storage-provisioner       1                   75314126158b2       storage-provisioner
	606de76f18367       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   49 seconds ago      Exited              etcd                      1                   3e829452ce675       etcd-kubernetes-upgrade-365550
	d2a8fab9bdfcd       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   50 seconds ago      Exited              kube-scheduler            1                   6abba6929fc95       kube-scheduler-kubernetes-upgrade-365550
	
	
	==> coredns [2addcbc66471967b45c5c2eef13b22fa9889514eb187d05e2664658421f07579] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [353f67208f0876f24dcd62e16fb0cc2d99870975f22002025eb5afadbf011edb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5790d7a480b415c6adb5a37444c1be8cb055525954919a4ac785c4060fef9bf2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [99ca559b6a6e0ddade0bb2479b67ef58d0b33bd913ca0aee41e915ccf07546ae] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-365550
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-365550
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-365550
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:47:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:47:15 +0000   Wed, 17 Apr 2024 19:45:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:47:15 +0000   Wed, 17 Apr 2024 19:45:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:47:15 +0000   Wed, 17 Apr 2024 19:45:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:47:15 +0000   Wed, 17 Apr 2024 19:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    kubernetes-upgrade-365550
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa54b79d9e8e474590a053da3df6bea4
	  System UUID:                fa54b79d-9e8e-4745-90a0-53da3df6bea4
	  Boot ID:                    f6490b1a-a052-452b-889b-6bddc504d122
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6srns                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 coredns-7db6d8ff4d-mcwpc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-kubernetes-upgrade-365550                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-kubernetes-upgrade-365550             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-365550    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-4g6lf                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-kubernetes-upgrade-365550             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 2s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasSufficientMemory
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                  node-controller  Node kubernetes-upgrade-365550 event: Registered Node kubernetes-upgrade-365550 in Controller
	  Normal  Starting                 31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)    kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)    kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)    kubelet          Node kubernetes-upgrade-365550 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                  kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.068386] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070490] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.223800] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.109970] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.292166] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +4.773257] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.065090] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.212663] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[ +13.588422] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.082341] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.170792] kauditd_printk_skb: 21 callbacks suppressed
	[Apr17 19:46] kauditd_printk_skb: 76 callbacks suppressed
	[ +28.730349] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.188640] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.211190] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +0.217599] systemd-fstab-generator[2252]: Ignoring "noauto" option for root device
	[  +1.311002] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +1.933082] systemd-fstab-generator[3267]: Ignoring "noauto" option for root device
	[  +1.259692] kauditd_printk_skb: 290 callbacks suppressed
	[  +7.563370] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.529761] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.820601] systemd-fstab-generator[4187]: Ignoring "noauto" option for root device
	[Apr17 19:47] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.148918] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.866142] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	
	
	==> etcd [606de76f183675d47338fb9e7f6143f76e40250b3d4436885a4a584396efbc62] <==
	{"level":"warn","ts":"2024-04-17T19:46:31.576565Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-17T19:46:31.576677Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.51:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.51:2380","--initial-cluster=kubernetes-upgrade-365550=https://192.168.39.51:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.51:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.51:2380","--name=kubernetes-upgrade-365550","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-04-17T19:46:31.576762Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-04-17T19:46:31.57679Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-17T19:46:31.576799Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2024-04-17T19:46:31.576834Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:46:31.58793Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.51:2379"]}
	{"level":"info","ts":"2024-04-17T19:46:31.588108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-365550","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.51:2380"],"listen-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.51:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","i
nitial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-04-17T19:46:31.668612Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"69.840451ms"}
	{"level":"info","ts":"2024-04-17T19:46:31.696747Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-17T19:46:31.753378Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","commit-index":411}
	{"level":"info","ts":"2024-04-17T19:46:31.753628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-17T19:46:31.753747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became follower at term 2"}
	{"level":"info","ts":"2024-04-17T19:46:31.753882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9049a3446d48952a [peers: [], term: 2, commit: 411, applied: 0, lastindex: 411, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-17T19:46:31.758688Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-17T19:46:31.813277Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":396}
	{"level":"info","ts":"2024-04-17T19:46:31.827734Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	
	
	==> etcd [c91d2d2de4d20c4886ecddc22c3e2455fa0be23f69f1fc1ddbe021857b412b84] <==
	{"level":"info","ts":"2024-04-17T19:47:12.953582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:47:12.953647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:47:12.961074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.51:2379"}
	{"level":"info","ts":"2024-04-17T19:47:12.961288Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:47:12.9654Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:47:12.970552Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T19:47:20.889419Z","caller":"traceutil/trace.go:171","msg":"trace[607090580] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"131.429223ms","start":"2024-04-17T19:47:20.757915Z","end":"2024-04-17T19:47:20.889344Z","steps":["trace[607090580] 'read index received'  (duration: 37.047492ms)","trace[607090580] 'applied index is now lower than readState.Index'  (duration: 94.38092ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:47:20.889474Z","caller":"traceutil/trace.go:171","msg":"trace[1152805] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"312.311322ms","start":"2024-04-17T19:47:20.577139Z","end":"2024-04-17T19:47:20.889451Z","steps":["trace[1152805] 'process raft request'  (duration: 217.884268ms)","trace[1152805] 'compare'  (duration: 94.164718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:47:20.889695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.664068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1125"}
	{"level":"info","ts":"2024-04-17T19:47:20.889775Z","caller":"traceutil/trace.go:171","msg":"trace[1942630163] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:421; }","duration":"131.88648ms","start":"2024-04-17T19:47:20.757878Z","end":"2024-04-17T19:47:20.889764Z","steps":["trace[1942630163] 'agreement among raft nodes before linearized reading'  (duration: 131.607566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:47:20.889936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:47:20.577119Z","time spent":"312.403042ms","remote":"127.0.0.1:33078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":761,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" value_size:676 lease:1525188574948976938 >> failure:<>"}
	{"level":"warn","ts":"2024-04-17T19:47:21.243884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.45111ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10748560611803752828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:418 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1037 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:47:21.244097Z","caller":"traceutil/trace.go:171","msg":"trace[116540071] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"282.996664ms","start":"2024-04-17T19:47:20.961089Z","end":"2024-04-17T19:47:21.244086Z","steps":["trace[116540071] 'process raft request'  (duration: 282.898033ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:47:21.244461Z","caller":"traceutil/trace.go:171","msg":"trace[1065669018] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"347.049658ms","start":"2024-04-17T19:47:20.897398Z","end":"2024-04-17T19:47:21.244447Z","steps":["trace[1065669018] 'process raft request'  (duration: 161.823788ms)","trace[1065669018] 'compare'  (duration: 184.329417ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:47:21.244535Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:47:20.89738Z","time spent":"347.116812ms","remote":"127.0.0.1:33194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1110,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:418 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1037 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-17T19:47:21.605517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.578102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10748560611803752831 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" mod_revision:421 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" value_size:676 lease:1525188574948976938 >> failure:<request_range:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:47:21.605635Z","caller":"traceutil/trace.go:171","msg":"trace[1025197443] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"353.841059ms","start":"2024-04-17T19:47:21.251778Z","end":"2024-04-17T19:47:21.605619Z","steps":["trace[1025197443] 'process raft request'  (duration: 219.021319ms)","trace[1025197443] 'compare'  (duration: 134.38079ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:47:21.60571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:47:21.251766Z","time spent":"353.902439ms","remote":"127.0.0.1:33078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":761,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" mod_revision:421 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" value_size:676 lease:1525188574948976938 >> failure:<request_range:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e21122\" > >"}
	{"level":"warn","ts":"2024-04-17T19:47:21.868127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.884647ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10748560611803752835 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e28e05\" mod_revision:424 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e28e05\" value_size:670 lease:1525188574948976938 >> failure:<request_range:<key:\"/registry/events/default/kubernetes-upgrade-365550.17c728f417e28e05\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:47:21.8683Z","caller":"traceutil/trace.go:171","msg":"trace[1160742097] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:460; }","duration":"175.722086ms","start":"2024-04-17T19:47:21.692564Z","end":"2024-04-17T19:47:21.868286Z","steps":["trace[1160742097] 'read index received'  (duration: 52.563822ms)","trace[1160742097] 'applied index is now lower than readState.Index'  (duration: 123.156525ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T19:47:21.868365Z","caller":"traceutil/trace.go:171","msg":"trace[1835939792] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"186.573865ms","start":"2024-04-17T19:47:21.681782Z","end":"2024-04-17T19:47:21.868356Z","steps":["trace[1835939792] 'process raft request'  (duration: 63.393095ms)","trace[1835939792] 'compare'  (duration: 122.795872ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:47:21.868523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.946093ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-17T19:47:21.868629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.702552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-365550\" ","response":"range_response_count:1 size:4622"}
	{"level":"info","ts":"2024-04-17T19:47:21.868694Z","caller":"traceutil/trace.go:171","msg":"trace[461230984] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-365550; range_end:; response_count:1; response_revision:427; }","duration":"170.7874ms","start":"2024-04-17T19:47:21.697895Z","end":"2024-04-17T19:47:21.868683Z","steps":["trace[461230984] 'agreement among raft nodes before linearized reading'  (duration: 170.683134ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:47:21.868638Z","caller":"traceutil/trace.go:171","msg":"trace[1185278348] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:427; }","duration":"176.070346ms","start":"2024-04-17T19:47:21.692555Z","end":"2024-04-17T19:47:21.868625Z","steps":["trace[1185278348] 'agreement among raft nodes before linearized reading'  (duration: 175.891443ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:47:22 up 2 min,  0 users,  load average: 1.29, 0.62, 0.23
	Linux kubernetes-upgrade-365550 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [073bc7cfd96265edca700505bec0dc8ec9a8bd65f047f5cf8c824da89de002bc] <==
	I0417 19:46:50.021137       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0417 19:46:50.474461       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:50.475351       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0417 19:46:50.475460       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0417 19:46:50.480003       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:46:50.480932       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0417 19:46:50.480979       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0417 19:46:50.481163       1 instance.go:299] Using reconciler: lease
	W0417 19:46:50.482360       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:51.475845       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:51.475845       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:51.483627       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:52.947883       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:53.058357       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:53.074024       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:55.155847       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:55.568804       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:55.970798       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:59.174600       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:46:59.994823       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:47:00.333786       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:47:04.819531       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:47:06.639672       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0417 19:47:06.869685       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0417 19:47:10.482999       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [092e00f08965fa6bd7a47b0705d0e1d1caa6b5558c354535082b85dd9e2dd8c0] <==
	I0417 19:47:14.961869       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0417 19:47:14.961906       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0417 19:47:14.986900       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:47:14.994917       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:47:14.995123       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 19:47:14.995197       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:47:14.995370       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0417 19:47:14.995513       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:47:14.995906       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:47:14.997704       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:47:14.997822       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:47:14.997944       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:47:14.997978       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:47:15.057126       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:47:15.058366       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 19:47:15.058385       1 policy_source.go:224] refreshing policies
	I0417 19:47:15.059903       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 19:47:15.087351       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:47:15.898397       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0417 19:47:16.679072       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:47:17.022524       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 19:47:17.045863       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 19:47:17.098403       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 19:47:17.194987       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:47:17.203879       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [640d8f6944751745344fd4ae943b0d9fc6ce571efecea6f84709b7a5ca0bd92d] <==
	I0417 19:46:44.268288       1 serving.go:380] Generated self-signed cert in-memory
	I0417 19:46:44.488568       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.2"
	I0417 19:46:44.488660       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:46:44.490277       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:46:44.490404       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:46:44.490704       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0417 19:46:44.490731       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0417 19:47:11.491169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.51:8443/healthz\": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:45582->192.168.39.51:8443: read: connection reset by peer"
	
	
	==> kube-controller-manager [cbe9a1d885fddb851186bf36a081baa935589529f8907dadffe98980d30ddad8] <==
	I0417 19:47:13.114278       1 serving.go:380] Generated self-signed cert in-memory
	I0417 19:47:13.381066       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.2"
	I0417 19:47:13.381148       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:47:13.383213       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:47:13.383457       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:47:13.383954       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0417 19:47:13.384051       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 19:47:16.951774       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0417 19:47:16.952005       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0417 19:47:17.053198       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [7f266c8204ebe49b71fa663b900d97fc3125d201f06115f7bb13886182ee3ff4] <==
	
	
	==> kube-proxy [da8cb9aaa48860742bfbac0e35ee55940070dd5fad4c745c19115ddf81bf3d4a] <==
	I0417 19:46:46.979788       1 server_linux.go:69] "Using iptables proxy"
	E0417 19:46:46.981969       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-365550\": dial tcp 192.168.39.51:8443: connect: connection refused"
	E0417 19:46:48.050804       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-365550\": dial tcp 192.168.39.51:8443: connect: connection refused"
	E0417 19:47:00.398544       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-365550\": net/http: TLS handshake timeout"
	E0417 19:47:11.491942       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-365550\": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60562->192.168.39.51:8443: read: connection reset by peer"
	I0417 19:47:19.738680       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.51"]
	I0417 19:47:19.838915       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:47:19.838992       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:47:19.839013       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:47:19.843350       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:47:19.843766       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:47:19.843995       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:47:19.846004       1 config.go:192] "Starting service config controller"
	I0417 19:47:19.846450       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:47:19.846668       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:47:19.846698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:47:19.848432       1 config.go:319] "Starting node config controller"
	I0417 19:47:19.849440       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:47:19.947543       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:47:19.947605       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:47:19.950131       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2a8fab9bdfcddbfbe2ee4bc7d086f13b05d26170e592f1e48470b5dd15c97bc] <==
	
	
	==> kube-scheduler [e11e4a2ba2a506dd4c30918fa2ab342275025be4ac3e93e5901395a57624a905] <==
	E0417 19:47:10.659176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.51:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:10.678798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:10.678968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:11.227835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.51:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:11.227904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.51:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:11.336415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.51:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:11.336469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.51:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:11.490285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.51:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60582->192.168.39.51:8443: read: connection reset by peer
	E0417 19:47:11.490363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.51:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60582->192.168.39.51:8443: read: connection reset by peer
	W0417 19:47:11.490352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.51:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60578->192.168.39.51:8443: read: connection reset by peer
	E0417 19:47:11.490404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.51:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60578->192.168.39.51:8443: read: connection reset by peer
	W0417 19:47:11.490473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.51:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60596->192.168.39.51:8443: read: connection reset by peer
	E0417 19:47:11.490498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.51:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.51:60596->192.168.39.51:8443: read: connection reset by peer
	W0417 19:47:11.711791       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.51:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:11.711924       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.51:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:12.177436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:12.177494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.51:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:12.282195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:12.282349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.51:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:12.651974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.51:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	E0417 19:47:12.652073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.51:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.51:8443: connect: connection refused
	W0417 19:47:14.930980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:47:14.931105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:47:14.931338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 19:47:14.931444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	Apr 17 19:47:12 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:12.326422    4194 scope.go:117] "RemoveContainer" containerID="073bc7cfd96265edca700505bec0dc8ec9a8bd65f047f5cf8c824da89de002bc"
	Apr 17 19:47:13 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:13.302865    4194 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-365550"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:14.971699    4194 apiserver.go:52] "Watching apiserver"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:14.979042    4194 topology_manager.go:215] "Topology Admit Handler" podUID="22ff1cb9-178d-4bc5-a779-4c5be38fc031" podNamespace="kube-system" podName="storage-provisioner"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:14.979431    4194 topology_manager.go:215] "Topology Admit Handler" podUID="13afa0bd-e6a7-4898-a241-58ec78818248" podNamespace="kube-system" podName="kube-proxy-4g6lf"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:14.979503    4194 topology_manager.go:215] "Topology Admit Handler" podUID="0e5adde5-2b9a-4e7a-b464-cc6e44a1969b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6srns"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:14.979547    4194 topology_manager.go:215] "Topology Admit Handler" podUID="9b9e3327-f67d-4963-83e1-e9696f17bee4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mcwpc"
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: W0417 19:47:14.990339    4194 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: E0417 19:47:14.990666    4194 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: W0417 19:47:14.991351    4194 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: E0417 19:47:14.991613    4194 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: W0417 19:47:14.992538    4194 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:14 kubernetes-upgrade-365550 kubelet[4194]: E0417 19:47:14.992731    4194 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-365550" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-365550' and this object
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.072001    4194 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.162071    4194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13afa0bd-e6a7-4898-a241-58ec78818248-lib-modules\") pod \"kube-proxy-4g6lf\" (UID: \"13afa0bd-e6a7-4898-a241-58ec78818248\") " pod="kube-system/kube-proxy-4g6lf"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.162317    4194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/22ff1cb9-178d-4bc5-a779-4c5be38fc031-tmp\") pod \"storage-provisioner\" (UID: \"22ff1cb9-178d-4bc5-a779-4c5be38fc031\") " pod="kube-system/storage-provisioner"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.163304    4194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13afa0bd-e6a7-4898-a241-58ec78818248-xtables-lock\") pod \"kube-proxy-4g6lf\" (UID: \"13afa0bd-e6a7-4898-a241-58ec78818248\") " pod="kube-system/kube-proxy-4g6lf"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.164754    4194 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-365550"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.164909    4194 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-365550"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.170333    4194 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:15.173604    4194 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 17 19:47:15 kubernetes-upgrade-365550 kubelet[4194]: E0417 19:47:15.363475    4194 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-365550\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-365550"
	Apr 17 19:47:16 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:16.180360    4194 scope.go:117] "RemoveContainer" containerID="99ca559b6a6e0ddade0bb2479b67ef58d0b33bd913ca0aee41e915ccf07546ae"
	Apr 17 19:47:16 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:16.181327    4194 scope.go:117] "RemoveContainer" containerID="7bcff4f7350292e5102e5fca538da1ff3ea15754668b1a3bf4818e47be46159c"
	Apr 17 19:47:16 kubernetes-upgrade-365550 kubelet[4194]: I0417 19:47:16.182694    4194 scope.go:117] "RemoveContainer" containerID="5790d7a480b415c6adb5a37444c1be8cb055525954919a4ac785c4060fef9bf2"
	
	
	==> storage-provisioner [66840eaa6998d5b02de707fecfbfe772bf7a933855824094db979b7fb7f889a8] <==
	I0417 19:47:16.629441       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0417 19:47:16.662196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0417 19:47:16.662417       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0417 19:47:16.693384       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0417 19:47:16.695440       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-365550_7abf4ab0-493b-402e-b9b2-b8b3b4fd74c5!
	I0417 19:47:16.697383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3c37b81-6c3f-4bf9-b679-73af2ba49c82", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-365550_7abf4ab0-493b-402e-b9b2-b8b3b4fd74c5 became leader
	I0417 19:47:16.798085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-365550_7abf4ab0-493b-402e-b9b2-b8b3b4fd74c5!
	
	
	==> storage-provisioner [7bcff4f7350292e5102e5fca538da1ff3ea15754668b1a3bf4818e47be46159c] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-365550 -n kubernetes-upgrade-365550
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-365550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-365550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-365550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-365550: (1.185464373s)
--- FAIL: TestKubernetesUpgrade (440.50s)
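For local triage, the two post-mortem checks the harness logged above can be re-run by hand; a minimal sketch, assuming a reproduction run in which the "kubernetes-upgrade-365550" profile still exists (the cleanup step above deletes it at the end of this run), and with the jsonpath/field-selector arguments quoted if your interactive shell would otherwise expand them:

	# report apiserver status for the profile (same command the harness ran above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-365550 -n kubernetes-upgrade-365550
	# list any pods not in the Running phase across all namespaces (same selector as above)
	kubectl --context kubernetes-upgrade-365550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running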

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (64.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-646953 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-646953 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.929995689s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-646953] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-646953" primary control-plane node in "pause-646953" cluster
	* Updating the running kvm2 "pause-646953" VM ...
	* Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-646953" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0417 19:44:22.216214  124898 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:44:22.216495  124898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:44:22.216506  124898 out.go:304] Setting ErrFile to fd 2...
	I0417 19:44:22.216511  124898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:44:22.216723  124898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:44:22.217416  124898 out.go:298] Setting JSON to false
	I0417 19:44:22.218504  124898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12410,"bootTime":1713370652,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:44:22.218573  124898 start.go:139] virtualization: kvm guest
	I0417 19:44:22.220883  124898 out.go:177] * [pause-646953] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:44:22.222997  124898 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:44:22.223054  124898 notify.go:220] Checking for updates...
	I0417 19:44:22.224511  124898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:44:22.226085  124898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:44:22.227475  124898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:44:22.228905  124898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:44:22.230283  124898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:44:22.232078  124898 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:44:22.232501  124898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2
	I0417 19:44:22.232579  124898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:44:22.249650  124898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0417 19:44:22.250164  124898 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:44:22.250778  124898 main.go:141] libmachine: Using API Version  1
	I0417 19:44:22.250804  124898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:44:22.251181  124898 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:44:22.251421  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:22.251795  124898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:44:22.252073  124898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2
	I0417 19:44:22.252124  124898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:44:22.266807  124898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I0417 19:44:22.267234  124898 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:44:22.267740  124898 main.go:141] libmachine: Using API Version  1
	I0417 19:44:22.267763  124898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:44:22.268076  124898 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:44:22.268254  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:22.305611  124898 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 19:44:22.307190  124898 start.go:297] selected driver: kvm2
	I0417 19:44:22.307206  124898 start.go:901] validating driver "kvm2" against &{Name:pause-646953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.0-rc.2 ClusterName:pause-646953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:44:22.307345  124898 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:44:22.307687  124898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:44:22.307760  124898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:44:22.323547  124898 install.go:137] /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:44:22.324553  124898 cni.go:84] Creating CNI manager for ""
	I0417 19:44:22.324578  124898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:44:22.324651  124898 start.go:340] cluster config:
	{Name:pause-646953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:pause-646953 Namespace:default APIServerHAVIP
: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:44:22.324865  124898 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:44:22.326813  124898 out.go:177] * Starting "pause-646953" primary control-plane node in "pause-646953" cluster
	I0417 19:44:22.328159  124898 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:44:22.328191  124898 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:44:22.328198  124898 cache.go:56] Caching tarball of preloaded images
	I0417 19:44:22.328280  124898 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:44:22.328291  124898 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:44:22.328402  124898 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/config.json ...
	I0417 19:44:22.328592  124898 start.go:360] acquireMachinesLock for pause-646953: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:44:22.328637  124898 start.go:364] duration metric: took 25.189µs to acquireMachinesLock for "pause-646953"
	I0417 19:44:22.328652  124898 start.go:96] Skipping create...Using existing machine configuration
	I0417 19:44:22.328671  124898 fix.go:54] fixHost starting: 
	I0417 19:44:22.328977  124898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2
	I0417 19:44:22.329012  124898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:44:22.345317  124898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0417 19:44:22.345763  124898 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:44:22.346268  124898 main.go:141] libmachine: Using API Version  1
	I0417 19:44:22.346293  124898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:44:22.346652  124898 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:44:22.346885  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:22.347046  124898 main.go:141] libmachine: (pause-646953) Calling .GetState
	I0417 19:44:22.348658  124898 fix.go:112] recreateIfNeeded on pause-646953: state=Running err=<nil>
	W0417 19:44:22.348691  124898 fix.go:138] unexpected machine state, will restart: <nil>
	I0417 19:44:22.350712  124898 out.go:177] * Updating the running kvm2 "pause-646953" VM ...
	I0417 19:44:22.351965  124898 machine.go:94] provisionDockerMachine start ...
	I0417 19:44:22.351990  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:22.352224  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:22.354705  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.355181  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.355204  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.355331  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:22.355497  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.355645  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.355790  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:22.355928  124898 main.go:141] libmachine: Using SSH client type: native
	I0417 19:44:22.356177  124898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I0417 19:44:22.356192  124898 main.go:141] libmachine: About to run SSH command:
	hostname
	I0417 19:44:22.474365  124898 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-646953
	
	I0417 19:44:22.474402  124898 main.go:141] libmachine: (pause-646953) Calling .GetMachineName
	I0417 19:44:22.474702  124898 buildroot.go:166] provisioning hostname "pause-646953"
	I0417 19:44:22.474727  124898 main.go:141] libmachine: (pause-646953) Calling .GetMachineName
	I0417 19:44:22.474936  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:22.477815  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.478240  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.478271  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.478398  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:22.478586  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.478744  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.478873  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:22.479057  124898 main.go:141] libmachine: Using SSH client type: native
	I0417 19:44:22.479267  124898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I0417 19:44:22.479288  124898 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-646953 && echo "pause-646953" | sudo tee /etc/hostname
	I0417 19:44:22.613883  124898 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-646953
	
	I0417 19:44:22.613919  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:22.616878  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.617353  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.617394  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.617529  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:22.617725  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.617941  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.618113  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:22.618254  124898 main.go:141] libmachine: Using SSH client type: native
	I0417 19:44:22.618426  124898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I0417 19:44:22.618446  124898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-646953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-646953/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-646953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0417 19:44:22.742040  124898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0417 19:44:22.742091  124898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18665-75973/.minikube CaCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18665-75973/.minikube}
	I0417 19:44:22.742142  124898 buildroot.go:174] setting up certificates
	I0417 19:44:22.742159  124898 provision.go:84] configureAuth start
	I0417 19:44:22.742180  124898 main.go:141] libmachine: (pause-646953) Calling .GetMachineName
	I0417 19:44:22.742502  124898 main.go:141] libmachine: (pause-646953) Calling .GetIP
	I0417 19:44:22.746034  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.746449  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.746477  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.746621  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:22.749209  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.749557  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.749582  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.749718  124898 provision.go:143] copyHostCerts
	I0417 19:44:22.749789  124898 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem, removing ...
	I0417 19:44:22.749801  124898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem
	I0417 19:44:22.749874  124898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/ca.pem (1078 bytes)
	I0417 19:44:22.750024  124898 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem, removing ...
	I0417 19:44:22.750039  124898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem
	I0417 19:44:22.750068  124898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/cert.pem (1123 bytes)
	I0417 19:44:22.750161  124898 exec_runner.go:144] found /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem, removing ...
	I0417 19:44:22.750173  124898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem
	I0417 19:44:22.750202  124898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18665-75973/.minikube/key.pem (1679 bytes)
	I0417 19:44:22.750287  124898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem org=jenkins.pause-646953 san=[127.0.0.1 192.168.61.156 localhost minikube pause-646953]
	I0417 19:44:22.851028  124898 provision.go:177] copyRemoteCerts
	I0417 19:44:22.851089  124898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0417 19:44:22.851116  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:22.854038  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.854424  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:22.854465  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:22.854658  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:22.854884  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:22.855030  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:22.855182  124898 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/pause-646953/id_rsa Username:docker}
	I0417 19:44:22.944001  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0417 19:44:22.975459  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0417 19:44:23.006107  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0417 19:44:23.033871  124898 provision.go:87] duration metric: took 291.694155ms to configureAuth
	I0417 19:44:23.033900  124898 buildroot.go:189] setting minikube options for container-runtime
	I0417 19:44:23.034106  124898 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:44:23.034188  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:23.037092  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:23.037487  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:23.037518  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:23.037670  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:23.037899  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:23.038071  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:23.038255  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:23.038476  124898 main.go:141] libmachine: Using SSH client type: native
	I0417 19:44:23.038693  124898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I0417 19:44:23.038714  124898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0417 19:44:28.625988  124898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0417 19:44:28.626028  124898 machine.go:97] duration metric: took 6.274043438s to provisionDockerMachine
	I0417 19:44:28.626043  124898 start.go:293] postStartSetup for "pause-646953" (driver="kvm2")
	I0417 19:44:28.626060  124898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0417 19:44:28.626087  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:28.626435  124898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0417 19:44:28.626461  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:28.629350  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.629790  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:28.629822  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.629986  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:28.630210  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:28.630449  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:28.630633  124898 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/pause-646953/id_rsa Username:docker}
	I0417 19:44:28.724853  124898 ssh_runner.go:195] Run: cat /etc/os-release
	I0417 19:44:28.730619  124898 info.go:137] Remote host: Buildroot 2023.02.9
	I0417 19:44:28.730655  124898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/addons for local assets ...
	I0417 19:44:28.730737  124898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18665-75973/.minikube/files for local assets ...
	I0417 19:44:28.730844  124898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem -> 832072.pem in /etc/ssl/certs
	I0417 19:44:28.731004  124898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0417 19:44:28.743091  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:44:28.775936  124898 start.go:296] duration metric: took 149.876195ms for postStartSetup
	I0417 19:44:28.775984  124898 fix.go:56] duration metric: took 6.447325582s for fixHost
	I0417 19:44:28.776008  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:28.779221  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.779665  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:28.779712  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.779890  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:28.780087  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:28.780346  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:28.780531  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:28.780761  124898 main.go:141] libmachine: Using SSH client type: native
	I0417 19:44:28.780999  124898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.156 22 <nil> <nil>}
	I0417 19:44:28.781019  124898 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0417 19:44:28.901901  124898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713383068.888758951
	
	I0417 19:44:28.901930  124898 fix.go:216] guest clock: 1713383068.888758951
	I0417 19:44:28.901941  124898 fix.go:229] Guest: 2024-04-17 19:44:28.888758951 +0000 UTC Remote: 2024-04-17 19:44:28.775988022 +0000 UTC m=+6.609868241 (delta=112.770929ms)
	I0417 19:44:28.901969  124898 fix.go:200] guest clock delta is within tolerance: 112.770929ms
	I0417 19:44:28.901976  124898 start.go:83] releasing machines lock for "pause-646953", held for 6.573328597s
	I0417 19:44:28.902001  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:28.902307  124898 main.go:141] libmachine: (pause-646953) Calling .GetIP
	I0417 19:44:28.905226  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.905633  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:28.905683  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.905781  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:28.906376  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:28.906585  124898 main.go:141] libmachine: (pause-646953) Calling .DriverName
	I0417 19:44:28.906675  124898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0417 19:44:28.906727  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:28.906843  124898 ssh_runner.go:195] Run: cat /version.json
	I0417 19:44:28.906871  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHHostname
	I0417 19:44:28.909605  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.909830  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.909984  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:28.910013  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.910172  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:28.910246  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:28.910280  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:28.910335  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:28.910447  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHPort
	I0417 19:44:28.910506  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:28.910616  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHKeyPath
	I0417 19:44:28.910828  124898 main.go:141] libmachine: (pause-646953) Calling .GetSSHUsername
	I0417 19:44:28.910832  124898 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/pause-646953/id_rsa Username:docker}
	I0417 19:44:28.910973  124898 sshutil.go:53] new ssh client: &{IP:192.168.61.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/pause-646953/id_rsa Username:docker}
	I0417 19:44:29.017238  124898 ssh_runner.go:195] Run: systemctl --version
	I0417 19:44:29.023727  124898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0417 19:44:29.187843  124898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0417 19:44:29.194488  124898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0417 19:44:29.194556  124898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0417 19:44:29.205465  124898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0417 19:44:29.205497  124898 start.go:494] detecting cgroup driver to use...
	I0417 19:44:29.205571  124898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0417 19:44:29.224435  124898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0417 19:44:29.241476  124898 docker.go:217] disabling cri-docker service (if available) ...
	I0417 19:44:29.241542  124898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0417 19:44:29.258010  124898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0417 19:44:29.273632  124898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0417 19:44:29.414351  124898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0417 19:44:29.549169  124898 docker.go:233] disabling docker service ...
	I0417 19:44:29.549254  124898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0417 19:44:29.567839  124898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0417 19:44:29.581798  124898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0417 19:44:29.723973  124898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0417 19:44:29.862292  124898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0417 19:44:29.877717  124898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0417 19:44:29.899015  124898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0417 19:44:29.899077  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.910588  124898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0417 19:44:29.910677  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.923468  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.935040  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.946117  124898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0417 19:44:29.957550  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.968396  124898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.980302  124898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0417 19:44:29.991483  124898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0417 19:44:30.001584  124898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0417 19:44:30.011463  124898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:44:30.157086  124898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0417 19:44:36.530725  124898 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.373589466s)
	I0417 19:44:36.530763  124898 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0417 19:44:36.530818  124898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0417 19:44:36.537957  124898 start.go:562] Will wait 60s for crictl version
	I0417 19:44:36.538017  124898 ssh_runner.go:195] Run: which crictl
	I0417 19:44:36.542647  124898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0417 19:44:36.592308  124898 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0417 19:44:36.592394  124898 ssh_runner.go:195] Run: crio --version
	I0417 19:44:36.624924  124898 ssh_runner.go:195] Run: crio --version
	I0417 19:44:36.661477  124898 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0417 19:44:36.663015  124898 main.go:141] libmachine: (pause-646953) Calling .GetIP
	I0417 19:44:36.666321  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:36.666730  124898 main.go:141] libmachine: (pause-646953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:27:1a", ip: ""} in network mk-pause-646953: {Iface:virbr2 ExpiryTime:2024-04-17 20:43:00 +0000 UTC Type:0 Mac:52:54:00:59:27:1a Iaid: IPaddr:192.168.61.156 Prefix:24 Hostname:pause-646953 Clientid:01:52:54:00:59:27:1a}
	I0417 19:44:36.666783  124898 main.go:141] libmachine: (pause-646953) DBG | domain pause-646953 has defined IP address 192.168.61.156 and MAC address 52:54:00:59:27:1a in network mk-pause-646953
	I0417 19:44:36.666974  124898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0417 19:44:36.673085  124898 kubeadm.go:877] updating cluster {Name:pause-646953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
-rc.2 ClusterName:pause-646953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0417 19:44:36.673218  124898 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:44:36.673275  124898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:44:36.727143  124898 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:44:36.727168  124898 crio.go:433] Images already preloaded, skipping extraction
	I0417 19:44:36.727210  124898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:44:36.772136  124898 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:44:36.772161  124898 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:44:36.772170  124898 kubeadm.go:928] updating node { 192.168.61.156 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:44:36.772289  124898 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-646953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:pause-646953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0417 19:44:36.772379  124898 ssh_runner.go:195] Run: crio config
	I0417 19:44:36.832396  124898 cni.go:84] Creating CNI manager for ""
	I0417 19:44:36.832423  124898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 19:44:36.832435  124898 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:44:36.832465  124898 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.156 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-646953 NodeName:pause-646953 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:44:36.832671  124898 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-646953"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:44:36.832752  124898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:44:36.844568  124898 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:44:36.844656  124898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:44:36.857252  124898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0417 19:44:36.878478  124898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:44:36.902009  124898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
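The kubeadm.go:187 dump above is the fully rendered config that this step copies to /var/tmp/minikube/kubeadm.yaml.new on the node; minikube builds it from the option set logged at kubeadm.go:181. A toy text/template rendering of just the InitConfiguration stanza shows the shape of that step — the struct and field names here are hypothetical, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// opts holds the handful of values substituted into the InitConfiguration
// stanza; illustrative field names only.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.61.156",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "pause-646953",
		NodeIP:           "192.168.61.156",
	})
}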
	I0417 19:44:36.926358  124898 ssh_runner.go:195] Run: grep 192.168.61.156	control-plane.minikube.internal$ /etc/hosts
	I0417 19:44:36.931521  124898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:44:37.087061  124898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:44:37.108498  124898 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953 for IP: 192.168.61.156
	I0417 19:44:37.108530  124898 certs.go:194] generating shared ca certs ...
	I0417 19:44:37.108553  124898 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:44:37.108764  124898 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:44:37.108862  124898 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:44:37.108888  124898 certs.go:256] generating profile certs ...
	I0417 19:44:37.109019  124898 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/client.key
	I0417 19:44:37.109095  124898 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/apiserver.key.0eb1cc44
	I0417 19:44:37.109150  124898 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/proxy-client.key
	I0417 19:44:37.109303  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:44:37.109344  124898 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:44:37.109359  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:44:37.109392  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:44:37.109423  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:44:37.109457  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:44:37.109511  124898 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:44:37.110355  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:44:37.147493  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:44:37.180407  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:44:37.209078  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:44:37.237718  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 19:44:37.269875  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:44:37.305483  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:44:37.334353  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/pause-646953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0417 19:44:37.361577  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:44:37.386340  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:44:37.414882  124898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:44:37.441946  124898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:44:37.465830  124898 ssh_runner.go:195] Run: openssl version
	I0417 19:44:37.473390  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:44:37.485738  124898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:44:37.490917  124898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:44:37.490973  124898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:44:37.504533  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 19:44:37.538485  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:44:37.578407  124898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:44:37.594703  124898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:44:37.594782  124898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:44:37.614419  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:44:37.711942  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:44:37.814953  124898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:44:37.875235  124898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:44:37.875304  124898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:44:37.948745  124898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
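The test-and-link commands above install each PEM under /usr/share/ca-certificates and add the `<subject-hash>.0` symlink in /etc/ssl/certs that OpenSSL's lookup relies on (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch of the same pattern, shelling out to openssl for the hash exactly as the log does (assumes root and an openssl binary on PATH; not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the pair of shell steps from the log:
//   openssl x509 -hash -noout -in <cert>   -> e.g. "b5213941"
//   ln -fs <cert> /etc/ssl/certs/<hash>.0
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}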
	I0417 19:44:38.015874  124898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:44:38.039647  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0417 19:44:38.048714  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0417 19:44:38.063119  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0417 19:44:38.079981  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0417 19:44:38.144396  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0417 19:44:38.199953  124898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
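Each `openssl x509 -checkend 86400` call above just asks whether the certificate will still be valid 24 hours from now, so the cluster is not restarted with credentials about to expire. The same check expressed with crypto/x509 — a sketch assuming the file holds a single PEM block:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will be
// expired `window` from now — the analogue of `openssl x509 -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}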
	I0417 19:44:38.252834  124898 kubeadm.go:391] StartCluster: {Name:pause-646953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc
.2 ClusterName:pause-646953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plug
in:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:44:38.252958  124898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:44:38.253010  124898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:44:38.454062  124898 cri.go:89] found id: "76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea"
	I0417 19:44:38.454090  124898 cri.go:89] found id: "e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7"
	I0417 19:44:38.454096  124898 cri.go:89] found id: "79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9"
	I0417 19:44:38.454102  124898 cri.go:89] found id: "1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74"
	I0417 19:44:38.454107  124898 cri.go:89] found id: "0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4"
	I0417 19:44:38.454111  124898 cri.go:89] found id: "93532abf5543a71df9662122a7db9b1fae9ff246cfb05d19bda111baf0648ae4"
	I0417 19:44:38.454115  124898 cri.go:89] found id: "4dd3577e7d0d2f9feb33f9e89b6f03b4e801c1e9c6a8c0c6ab56cef3f06870b9"
	I0417 19:44:38.454119  124898 cri.go:89] found id: "a9c86e61d693ade1e7c191ed9c66c894adba5ce3c9cdc1771df686a3d6bfa369"
	I0417 19:44:38.454123  124898 cri.go:89] found id: "520eaac97afd54ef7bda9c5c864f9d30c87a0f97cb9fb3b9f9af3e909cf2e509"
	I0417 19:44:38.454130  124898 cri.go:89] found id: "c2448639663fa1f4c6bef5a927b34fc4e2a3328a8a8721e776a86451369c5b85"
	I0417 19:44:38.454134  124898 cri.go:89] found id: ""
	I0417 19:44:38.454180  124898 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
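The cri.go:89 "found id" lines near the end of the stderr above are minikube splitting the output of `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` into container IDs (the trailing empty entry is most likely just the final newline). A hedged sketch of that listing step in Go, invoking crictl over exec (illustrative, not the minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query as the log and
// returns the non-empty container IDs, one per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}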
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-646953 -n pause-646953
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-646953 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-646953 logs -n 25: (1.436046594s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl status kubelet --all                       |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat kubelet                                |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat docker                                 |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo docker                        | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | system info                                          |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat cri-docker                             |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | cri-dockerd --version                                |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status containerd                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat containerd                             |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | containerd config dump                               |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl status crio --all                          |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo find                          | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo crio                          | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | config                                               |                       |         |                |                     |                     |
	| delete  | -p kindnet-450558                                    | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	| start   | -p custom-flannel-450558                             | custom-flannel-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |                |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |                |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |                |                     |                     |
	|         | --driver=kvm2                                        |                       |         |                |                     |                     |
	|         | --container-runtime=crio                             |                       |         |                |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:45:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:45:13.746174  128079 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:45:13.746309  128079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:45:13.746324  128079 out.go:304] Setting ErrFile to fd 2...
	I0417 19:45:13.746332  128079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:45:13.746657  128079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:45:13.747516  128079 out.go:298] Setting JSON to false
	I0417 19:45:13.749405  128079 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12462,"bootTime":1713370652,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:45:13.749548  128079 start.go:139] virtualization: kvm guest
	I0417 19:45:13.835925  128079 out.go:177] * [custom-flannel-450558] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:45:13.980650  128079 notify.go:220] Checking for updates...
	I0417 19:45:14.095552  128079 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:45:14.110976  128079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:45:14.193914  128079 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:45:14.316834  128079 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:45:12.466433  126291 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.850561377s)
	I0417 19:45:12.466474  126291 crio.go:469] duration metric: took 2.850693904s to extract the tarball
	I0417 19:45:12.466486  126291 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 19:45:12.507156  126291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:45:12.555373  126291 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:45:12.555397  126291 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:45:12.555405  126291 kubeadm.go:928] updating node { 192.168.50.12 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:45:12.555505  126291 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-450558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:calico-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0417 19:45:12.555567  126291 ssh_runner.go:195] Run: crio config
	I0417 19:45:12.610249  126291 cni.go:84] Creating CNI manager for "calico"
	I0417 19:45:12.610281  126291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:45:12.610314  126291 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.12 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-450558 NodeName:calico-450558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:45:12.610494  126291 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-450558"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:45:12.610564  126291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:45:12.622689  126291 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:45:12.622752  126291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:45:12.634211  126291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0417 19:45:12.655022  126291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:45:12.682353  126291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0417 19:45:12.705157  126291 ssh_runner.go:195] Run: grep 192.168.50.12	control-plane.minikube.internal$ /etc/hosts
	I0417 19:45:12.711107  126291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
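The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current control-plane IP. An equivalent filter-and-append in Go — a sketch assuming permission to rewrite /etc/hosts, not minikube's own helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinControlPlaneHost rewrites /etc/hosts so exactly one line maps
// control-plane.minikube.internal to ip, mirroring the grep/echo/cp one-liner.
func pinControlPlaneHost(ip string) error {
	const host = "control-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(raw), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping, like `grep -v $'\t<host>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinControlPlaneHost("192.168.50.12"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}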
	I0417 19:45:12.729060  126291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:45:12.854230  126291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:45:12.875732  126291 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558 for IP: 192.168.50.12
	I0417 19:45:12.875759  126291 certs.go:194] generating shared ca certs ...
	I0417 19:45:12.875779  126291 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.875953  126291 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:45:12.876020  126291 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:45:12.876038  126291 certs.go:256] generating profile certs ...
	I0417 19:45:12.876115  126291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key
	I0417 19:45:12.876146  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt with IP's: []
	I0417 19:45:12.990369  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt ...
	I0417 19:45:12.990400  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt: {Name:mk005ac5a5a37a9f80cf82c5b80f4d0942d05f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.990591  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key ...
	I0417 19:45:12.990613  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key: {Name:mk51a2b07c04251dec4aa36ceacb335ebab56fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.990723  126291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401
	I0417 19:45:12.990746  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12]
	I0417 19:45:13.136716  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 ...
	I0417 19:45:13.136747  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401: {Name:mkd852c1af45888fb6d57d5ac625d75ed407b00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.136940  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401 ...
	I0417 19:45:13.136959  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401: {Name:mk15f10bd3142caa117463d65ea09321dea4df45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.137081  126291 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt
	I0417 19:45:13.137174  126291 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key
	I0417 19:45:13.137229  126291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key
	I0417 19:45:13.137245  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt with IP's: []
	I0417 19:45:13.281098  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt ...
	I0417 19:45:13.281128  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt: {Name:mk16e0a0a8c483281e183044f6a8d5dcaa8f6454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.281303  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key ...
	I0417 19:45:13.281318  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key: {Name:mk0cc19dc89a110f06e530290c8f3a2805d07250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
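The certs.go/crypto.go lines above generate the profile certificates for this node: an apiserver cert carrying the SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12] plus client and aggregator certs, all signed by the shared minikubeCA. A toy crypto/x509 sketch of that signing step is below; it mints a throwaway CA instead of reusing the one under ~/.minikube, and error handling is elided for brevity (illustrative only, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube instead reuses ca.crt/ca.key from ~/.minikube).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver cert with the SAN IPs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.12"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

	_ = os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	_ = os.WriteFile("apiserver.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
}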
	I0417 19:45:13.281520  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:45:13.281557  126291 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:45:13.281567  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:45:13.281594  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:45:13.281615  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:45:13.281643  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:45:13.281686  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:45:13.282252  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:45:13.311871  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:45:13.340877  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:45:13.367893  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:45:13.395863  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 19:45:13.425720  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:45:13.455409  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:45:13.483577  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0417 19:45:13.512857  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:45:13.540302  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:45:13.580463  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:45:13.605236  126291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:45:13.626452  126291 ssh_runner.go:195] Run: openssl version
	I0417 19:45:13.632908  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:45:13.645961  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.651153  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.651219  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.657372  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:45:13.669628  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:45:13.684251  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.689719  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.689782  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.696075  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:45:13.709195  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:45:13.722790  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.728473  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.728610  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.735471  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 19:45:13.750244  126291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:45:13.754878  126291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 19:45:13.754929  126291 kubeadm.go:391] StartCluster: {Name:calico-450558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-r
c.2 ClusterName:calico-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:45:13.755020  126291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:45:13.755083  126291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:45:13.803045  126291 cri.go:89] found id: ""
	I0417 19:45:13.803125  126291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 19:45:13.814595  126291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:45:13.825273  126291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:45:13.837863  126291 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:45:13.837885  126291 kubeadm.go:156] found existing configuration files:
	
	I0417 19:45:13.837940  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:45:13.848164  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:45:13.848220  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:45:13.859096  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:45:13.869961  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:45:13.870037  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:45:13.880632  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:45:13.890644  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:45:13.890735  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:45:13.901936  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:45:13.912389  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:45:13.912461  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:45:13.923838  126291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 19:45:13.984201  126291 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 19:45:13.984338  126291 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:45:14.119332  126291 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:45:14.119467  126291 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:45:14.119614  126291 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:45:14.376528  126291 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:45:14.490783  128079 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:45:14.607675  128079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:45:14.628432  128079 config.go:182] Loaded profile config "calico-450558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.628622  128079 config.go:182] Loaded profile config "kubernetes-upgrade-365550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.628858  128079 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.629004  128079 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:45:14.667214  128079 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 19:45:14.668540  128079 start.go:297] selected driver: kvm2
	I0417 19:45:14.668557  128079 start.go:901] validating driver "kvm2" against <nil>
	I0417 19:45:14.668574  128079 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:45:14.669388  128079 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:45:14.669486  128079 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:45:14.686852  128079 install.go:137] /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:45:14.686910  128079 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:45:14.687231  128079 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:45:14.687315  128079 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0417 19:45:14.687342  128079 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0417 19:45:14.687416  128079 start.go:340] cluster config:
	{Name:custom-flannel-450558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:custom-flannel-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:45:14.687533  128079 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:45:14.689563  128079 out.go:177] * Starting "custom-flannel-450558" primary control-plane node in "custom-flannel-450558" cluster
	I0417 19:45:10.635142  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:10.635725  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:10.635758  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:10.635668  127260 retry.go:31] will retry after 1.045237031s: waiting for machine to come up
	I0417 19:45:11.682317  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:11.682822  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:11.682855  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:11.682808  127260 retry.go:31] will retry after 1.273142266s: waiting for machine to come up
	I0417 19:45:13.087127  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:13.087655  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:13.087688  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:13.087597  127260 retry.go:31] will retry after 1.137015199s: waiting for machine to come up
	I0417 19:45:14.226274  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:14.227003  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:14.227037  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:14.226955  127260 retry.go:31] will retry after 1.508439303s: waiting for machine to come up
	I0417 19:45:14.490755  126291 out.go:204]   - Generating certificates and keys ...
	I0417 19:45:14.490908  126291 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:45:14.491022  126291 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:45:14.514454  126291 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 19:45:14.641385  126291 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 19:45:14.785918  126291 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 19:45:14.941821  126291 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 19:45:15.240188  126291 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 19:45:15.240387  126291 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-450558 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I0417 19:45:15.351493  126291 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 19:45:15.351852  126291 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-450558 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I0417 19:45:15.410048  126291 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 19:45:15.577060  126291 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 19:45:15.758418  126291 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 19:45:15.758702  126291 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:45:15.946009  126291 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:45:16.105543  126291 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 19:45:16.218848  126291 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:45:16.368011  126291 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:45:16.529800  126291 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:45:16.530769  126291 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:45:16.534968  126291 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:45:14.233703  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:16.319978  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:14.691067  128079 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:45:14.691120  128079 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:45:14.691140  128079 cache.go:56] Caching tarball of preloaded images
	I0417 19:45:14.691272  128079 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:45:14.691285  128079 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:45:14.691419  128079 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/config.json ...
	I0417 19:45:14.691442  128079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/config.json: {Name:mk89d01c6f9ae3e421c30740631cfd3eeff8b840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:14.691605  128079 start.go:360] acquireMachinesLock for custom-flannel-450558: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:45:18.320650  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:18.820697  124898 pod_ready.go:92] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.820737  124898 pod_ready.go:81] duration metric: took 11.508740323s for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.820750  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.826870  124898 pod_ready.go:92] pod "kube-controller-manager-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.826896  124898 pod_ready.go:81] duration metric: took 6.136523ms for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.826910  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.833641  124898 pod_ready.go:92] pod "kube-proxy-w9mzs" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.833731  124898 pod_ready.go:81] duration metric: took 6.803774ms for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.833759  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.841148  124898 pod_ready.go:92] pod "kube-scheduler-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.841174  124898 pod_ready.go:81] duration metric: took 7.383796ms for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.841184  124898 pod_ready.go:38] duration metric: took 13.051412994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:18.841229  124898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 19:45:18.855307  124898 ops.go:34] apiserver oom_adj: -16
	I0417 19:45:18.855337  124898 kubeadm.go:591] duration metric: took 40.282773589s to restartPrimaryControlPlane
	I0417 19:45:18.855349  124898 kubeadm.go:393] duration metric: took 40.602524348s to StartCluster
	I0417 19:45:18.855371  124898 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:18.855458  124898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:45:18.856374  124898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:18.856679  124898 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:45:18.858358  124898 out.go:177] * Verifying Kubernetes components...
	I0417 19:45:18.856809  124898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0417 19:45:18.856991  124898 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:18.862416  124898 out.go:177] * Enabled addons: 
	I0417 19:45:15.737500  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:15.738127  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:15.738156  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:15.738075  127260 retry.go:31] will retry after 2.611198893s: waiting for machine to come up
	I0417 19:45:18.352177  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:18.352793  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:18.352828  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:18.352747  127260 retry.go:31] will retry after 3.3267745s: waiting for machine to come up
	I0417 19:45:16.536958  126291 out.go:204]   - Booting up control plane ...
	I0417 19:45:16.537075  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:45:16.537188  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:45:16.537309  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:45:16.553139  126291 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:45:16.554065  126291 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:45:16.554157  126291 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:45:16.696003  126291 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 19:45:16.696130  126291 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 19:45:17.197794  126291 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.24812ms
	I0417 19:45:17.197909  126291 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 19:45:18.860119  124898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:45:18.863921  124898 addons.go:505] duration metric: took 7.157694ms for enable addons: enabled=[]
	I0417 19:45:19.094721  124898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:45:19.114864  124898 node_ready.go:35] waiting up to 6m0s for node "pause-646953" to be "Ready" ...
	I0417 19:45:19.119485  124898 node_ready.go:49] node "pause-646953" has status "Ready":"True"
	I0417 19:45:19.119512  124898 node_ready.go:38] duration metric: took 4.601149ms for node "pause-646953" to be "Ready" ...
	I0417 19:45:19.119524  124898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:19.126630  124898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.217045  124898 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:19.217079  124898 pod_ready.go:81] duration metric: took 90.415368ms for pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.217091  124898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.617768  124898 pod_ready.go:92] pod "etcd-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:19.617798  124898 pod_ready.go:81] duration metric: took 400.698195ms for pod "etcd-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.617810  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.016609  124898 pod_ready.go:92] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.016633  124898 pod_ready.go:81] duration metric: took 398.81534ms for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.016644  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.417144  124898 pod_ready.go:92] pod "kube-controller-manager-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.417170  124898 pod_ready.go:81] duration metric: took 400.51881ms for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.417185  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.817291  124898 pod_ready.go:92] pod "kube-proxy-w9mzs" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.817317  124898 pod_ready.go:81] duration metric: took 400.125061ms for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.817327  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:21.216898  124898 pod_ready.go:92] pod "kube-scheduler-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:21.216925  124898 pod_ready.go:81] duration metric: took 399.591159ms for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:21.216936  124898 pod_ready.go:38] duration metric: took 2.097399721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:21.216962  124898 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:45:21.217033  124898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:45:21.231787  124898 api_server.go:72] duration metric: took 2.375066625s to wait for apiserver process to appear ...
	I0417 19:45:21.231818  124898 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:45:21.231841  124898 api_server.go:253] Checking apiserver healthz at https://192.168.61.156:8443/healthz ...
	I0417 19:45:21.237395  124898 api_server.go:279] https://192.168.61.156:8443/healthz returned 200:
	ok
	I0417 19:45:21.238727  124898 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 19:45:21.238750  124898 api_server.go:131] duration metric: took 6.925029ms to wait for apiserver health ...
	I0417 19:45:21.238765  124898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:45:21.419064  124898 system_pods.go:59] 6 kube-system pods found
	I0417 19:45:21.419091  124898 system_pods.go:61] "coredns-7db6d8ff4d-qrd57" [de016395-52a7-4569-945d-dc21c63529c1] Running
	I0417 19:45:21.419095  124898 system_pods.go:61] "etcd-pause-646953" [7678dd72-ab0d-4fe8-b612-89a207c7fd46] Running
	I0417 19:45:21.419099  124898 system_pods.go:61] "kube-apiserver-pause-646953" [df5e6bfd-761e-4f49-a8ad-7d988a44f51d] Running
	I0417 19:45:21.419104  124898 system_pods.go:61] "kube-controller-manager-pause-646953" [07582c8b-da03-4a06-a628-0d023365d3c6] Running
	I0417 19:45:21.419108  124898 system_pods.go:61] "kube-proxy-w9mzs" [3eec3404-cdc2-4158-88b9-df4e4203f290] Running
	I0417 19:45:21.419111  124898 system_pods.go:61] "kube-scheduler-pause-646953" [de17acd1-919a-479d-ac56-bde1c4f78406] Running
	I0417 19:45:21.419117  124898 system_pods.go:74] duration metric: took 180.346079ms to wait for pod list to return data ...
	I0417 19:45:21.419123  124898 default_sa.go:34] waiting for default service account to be created ...
	I0417 19:45:21.617499  124898 default_sa.go:45] found service account: "default"
	I0417 19:45:21.617535  124898 default_sa.go:55] duration metric: took 198.404037ms for default service account to be created ...
	I0417 19:45:21.617554  124898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 19:45:21.819662  124898 system_pods.go:86] 6 kube-system pods found
	I0417 19:45:21.819691  124898 system_pods.go:89] "coredns-7db6d8ff4d-qrd57" [de016395-52a7-4569-945d-dc21c63529c1] Running
	I0417 19:45:21.819697  124898 system_pods.go:89] "etcd-pause-646953" [7678dd72-ab0d-4fe8-b612-89a207c7fd46] Running
	I0417 19:45:21.819701  124898 system_pods.go:89] "kube-apiserver-pause-646953" [df5e6bfd-761e-4f49-a8ad-7d988a44f51d] Running
	I0417 19:45:21.819705  124898 system_pods.go:89] "kube-controller-manager-pause-646953" [07582c8b-da03-4a06-a628-0d023365d3c6] Running
	I0417 19:45:21.819709  124898 system_pods.go:89] "kube-proxy-w9mzs" [3eec3404-cdc2-4158-88b9-df4e4203f290] Running
	I0417 19:45:21.819712  124898 system_pods.go:89] "kube-scheduler-pause-646953" [de17acd1-919a-479d-ac56-bde1c4f78406] Running
	I0417 19:45:21.819721  124898 system_pods.go:126] duration metric: took 202.159457ms to wait for k8s-apps to be running ...
	I0417 19:45:21.819730  124898 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 19:45:21.819783  124898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:45:21.838369  124898 system_svc.go:56] duration metric: took 18.624253ms WaitForService to wait for kubelet
	I0417 19:45:21.838404  124898 kubeadm.go:576] duration metric: took 2.981691937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:45:21.838442  124898 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:45:22.016815  124898 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 19:45:22.016847  124898 node_conditions.go:123] node cpu capacity is 2
	I0417 19:45:22.016859  124898 node_conditions.go:105] duration metric: took 178.407477ms to run NodePressure ...
	I0417 19:45:22.016877  124898 start.go:240] waiting for startup goroutines ...
	I0417 19:45:22.016887  124898 start.go:245] waiting for cluster config update ...
	I0417 19:45:22.016896  124898 start.go:254] writing updated cluster config ...
	I0417 19:45:22.017225  124898 ssh_runner.go:195] Run: rm -f paused
	I0417 19:45:22.072386  124898 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 19:45:22.074010  124898 out.go:177] * Done! kubectl is now configured to use "pause-646953" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.820855015Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrd57,Uid:de016395-52a7-4569-945d-dc21c63529c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077815338280,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:43:40.088306538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-646953,Uid:77fe5231f0f485a3b159d32d2c47ceda,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077651900596,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77fe5231f0f485a3b159d32d2c47ceda,kubernetes.io/config.seen: 2024-04-17T19:43:26.086740844Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-646953,Uid:e36f35028cccbcd2d15942038a695af3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077613756298,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e36f35028cccbcd2d15942038a695af3,kubernetes.io/config.seen: 2024-04-17T19:43:26.086739822Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&PodSandboxMetadata{Name:etcd-pause-646953,Uid:148dc06994f8c7b5b9f6353a4eca512b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077599408286,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.156:2379,kubernetes.io/config.hash: 148dc06994f8c7b5b9f6353a4eca512b,kubernetes.io/config.seen: 2024-04-17T19:43:26.086741846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-646953,Uid:93b87bf1dac16be7cd1f2193d3029fa7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077569478883,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.156:8443,kubernetes.io/config.hash: 93b87bf1dac16be7cd1f2193d3029fa7,kubernetes.io/config.seen: 2024-04-17T19:43:26.086736175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&PodSandboxMetadata{Name:kube-proxy-w9mzs,Uid:3eec3404-cdc2-4158-88b9-df4e4203f290,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077541795559,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:43:39.934636510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c3275984-a33c-48f2-a631-0eb7c2761da0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.821965464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffdf436d-4b20-467b-92ea-1e51a751f30c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.822068753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffdf436d-4b20-467b-92ea-1e51a751f30c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.822432624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffdf436d-4b20-467b-92ea-1e51a751f30c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.823885939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf326cb9-957c-48be-962a-75328bef5a72 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.823943794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf326cb9-957c-48be-962a-75328bef5a72 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.825143547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40645ea5-cc0f-4b6b-b766-94989a98d4eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.825619890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383122825597710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40645ea5-cc0f-4b6b-b766-94989a98d4eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.826217553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=383d13a9-be47-493d-b023-a33f016b5ce0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.826362578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=383d13a9-be47-493d-b023-a33f016b5ce0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.826666706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=383d13a9-be47-493d-b023-a33f016b5ce0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.869547473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c74f67b6-c89f-4948-8061-595ab86b1377 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.869620911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c74f67b6-c89f-4948-8061-595ab86b1377 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.871122970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7e52a24-25de-408d-ba5b-5ecb98061a13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.871676091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383122871648762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7e52a24-25de-408d-ba5b-5ecb98061a13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.872333429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00171fee-7286-4adb-9cea-66587a42ce26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.872385265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00171fee-7286-4adb-9cea-66587a42ce26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.872612067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00171fee-7286-4adb-9cea-66587a42ce26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.917649180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9faaf771-809b-45cc-a38d-179c53c01610 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.917723290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9faaf771-809b-45cc-a38d-179c53c01610 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.919421815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf3c4048-1fd6-47e1-a9a3-1218290ebd39 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.919957496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383122919927560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf3c4048-1fd6-47e1-a9a3-1218290ebd39 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.920719153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95aa0482-7d82-46e7-93e0-25182efc3904 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.920772702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95aa0482-7d82-46e7-93e0-25182efc3904 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:22 pause-646953 crio[2440]: time="2024-04-17 19:45:22.921025394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95aa0482-7d82-46e7-93e0-25182efc3904 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ed7515695bd57       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   17 seconds ago       Running             kube-proxy                2                   2ca3182af06de       kube-proxy-w9mzs
	f19a30628afd0       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   21 seconds ago       Running             kube-controller-manager   2                   c54196a452145       kube-controller-manager-pause-646953
	0b3e088056d8e       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   21 seconds ago       Running             kube-apiserver            2                   1b7852dacf760       kube-apiserver-pause-646953
	e0a1e6dd8fc61       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   21 seconds ago       Running             kube-scheduler            2                   1f09d2ea78252       kube-scheduler-pause-646953
	ca53906416edd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   33 seconds ago       Running             etcd                      2                   f442bef04d830       etcd-pause-646953
	e12360ee1bcc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   44 seconds ago       Running             coredns                   1                   2045deaa871b2       coredns-7db6d8ff4d-qrd57
	618429c1fd685       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   44 seconds ago       Exited              kube-controller-manager   1                   c54196a452145       kube-controller-manager-pause-646953
	76445264e28d7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   44 seconds ago       Exited              etcd                      1                   f442bef04d830       etcd-pause-646953
	e8370f2f797f7       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   45 seconds ago       Exited              kube-scheduler            1                   1f09d2ea78252       kube-scheduler-pause-646953
	79a0007ae410b       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   45 seconds ago       Exited              kube-apiserver            1                   1b7852dacf760       kube-apiserver-pause-646953
	1a9049c5a40d7       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   45 seconds ago       Exited              kube-proxy                1                   2ca3182af06de       kube-proxy-w9mzs
	0d41c03af85dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   c191b422a8899       coredns-7db6d8ff4d-qrd57
	
	
	==> coredns [0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[964633467]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.796) (total time: 29056ms):
	Trace[964633467]: ---"Objects listed" error:<nil> 29056ms (19:44:10.853)
	Trace[964633467]: [29.05698192s] [29.05698192s] END
	[INFO] plugin/kubernetes: Trace[1286549125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.796) (total time: 29058ms):
	Trace[1286549125]: ---"Objects listed" error:<nil> 29057ms (19:44:10.854)
	Trace[1286549125]: [29.058000718s] [29.058000718s] END
	[INFO] plugin/kubernetes: Trace[1957297417]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.799) (total time: 29055ms):
	Trace[1957297417]: ---"Objects listed" error:<nil> 29055ms (19:44:10.855)
	Trace[1957297417]: [29.055889272s] [29.055889272s] END
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43252 - 48434 "HINFO IN 8550402545300384940.1506769839142304122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020789501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44004 - 56259 "HINFO IN 7933819828100252723.1763484357391934585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050024402s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[284303994]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.194) (total time: 10002ms):
	Trace[284303994]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:44:49.196)
	Trace[284303994]: [10.002028079s] [10.002028079s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1080710972]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.196) (total time: 10001ms):
	Trace[1080710972]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:44:49.196)
	Trace[1080710972]: [10.001000918s] [10.001000918s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[270417557]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.195) (total time: 10001ms):
	Trace[270417557]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:44:49.196)
	Trace[270417557]: [10.001524932s] [10.001524932s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
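
The TLS handshake timeouts above come from coredns's client-go reflectors relisting Services, Namespaces and EndpointSlices through the kubernetes service VIP (10.96.0.1:443) while the kube-apiserver container is being restarted. As a hedged illustration only, the sketch below reproduces that list call with client-go from inside the cluster; in-cluster credentials are assumed, and the list options mirror the ?limit=500&resourceVersion=0 query shown in the log.

```go
// Illustrative sketch (not from the test run): the Services list that the
// coredns reflector keeps retrying above, issued directly with client-go.
// Assumes in-cluster credentials (service account token + kubernetes VIP).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // in this cluster the VIP is https://10.96.0.1:443
	if err != nil {
		log.Fatal(err)
	}
	// Bound the overall request so a dead apiserver fails fast instead of hanging;
	// the handshake error in the log is net/http's default 10s TLS handshake timeout.
	cfg.Timeout = 10 * time.Second

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent to GET /api/v1/services?limit=500&resourceVersion=0 in the log.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
	if err != nil {
		log.Fatal(err) // e.g. "net/http: TLS handshake timeout" during the restart window
	}
	fmt.Printf("listed %d services\n", len(svcs.Items))
}
```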
	
	
	==> describe nodes <==
	Name:               pause-646953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-646953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=pause-646953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_43_26_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:43:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-646953
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:45:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.156
	  Hostname:    pause-646953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 022aaa8a557641dbb2d3d59ba114cd51
	  System UUID:                022aaa8a-5576-41db-b2d3-d59ba114cd51
	  Boot ID:                    34a371e5-ecd5-445a-affb-c069b1debdc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-qrd57                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-pause-646953                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         117s
	  kube-system                 kube-apiserver-pause-646953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-pause-646953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-w9mzs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-646953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 26s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m4s)  kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m4s)  kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m4s)  kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeReady                116s                 kubelet          Node pause-646953 status is now: NodeReady
	  Normal  RegisteredNode           105s                 node-controller  Node pause-646953 event: Registered Node pause-646953 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node pause-646953 event: Registered Node pause-646953 in Controller
	
	
	==> dmesg <==
	[  +0.058513] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062749] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.189171] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.150626] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.280603] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.558352] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.058650] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.597739] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.441843] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.149120] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.081453] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.375375] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.012974] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[ +11.559884] kauditd_printk_skb: 88 callbacks suppressed
	[Apr17 19:44] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.131022] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +0.168595] systemd-fstab-generator[2384]: Ignoring "noauto" option for root device
	[  +0.140326] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.284497] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +6.927176] systemd-fstab-generator[2553]: Ignoring "noauto" option for root device
	[  +0.091065] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.262432] kauditd_printk_skb: 87 callbacks suppressed
	[ +11.135827] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[Apr17 19:45] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.804925] systemd-fstab-generator[3777]: Ignoring "noauto" option for root device
	
	
	==> etcd [76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea] <==
	{"level":"info","ts":"2024-04-17T19:44:38.595648Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"70.11274ms"}
	{"level":"info","ts":"2024-04-17T19:44:38.626981Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-17T19:44:38.649875Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","commit-index":463}
	{"level":"info","ts":"2024-04-17T19:44:38.650058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-17T19:44:38.650133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became follower at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:38.650183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d4d2bfcafeea25a0 [peers: [], term: 2, commit: 463, applied: 0, lastindex: 463, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-17T19:44:38.659477Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-17T19:44:38.697755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":443}
	{"level":"info","ts":"2024-04-17T19:44:38.707673Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-17T19:44:38.717426Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d4d2bfcafeea25a0","timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:44:38.717814Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d4d2bfcafeea25a0"}
	{"level":"info","ts":"2024-04-17T19:44:38.717884Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"d4d2bfcafeea25a0","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-17T19:44:38.718131Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-17T19:44:38.724336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724414Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724427Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 switched to configuration voters=(15335530559731017120)"}
	{"level":"info","ts":"2024-04-17T19:44:38.724744Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","added-peer-id":"d4d2bfcafeea25a0","added-peer-peer-urls":["https://192.168.61.156:2380"]}
	{"level":"info","ts":"2024-04-17T19:44:38.72484Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:44:38.724886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:44:38.751437Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:44:38.751639Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:38.751796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:38.758327Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4d2bfcafeea25a0","initial-advertise-peer-urls":["https://192.168.61.156:2380"],"listen-peer-urls":["https://192.168.61.156:2380"],"advertise-client-urls":["https://192.168.61.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-17T19:44:38.758391Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff] <==
	{"level":"info","ts":"2024-04-17T19:44:49.439637Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:51.022846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.022958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.023024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 received MsgPreVoteResp from d4d2bfcafeea25a0 at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.023061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became candidate at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 received MsgVoteResp from d4d2bfcafeea25a0 at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became leader at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4d2bfcafeea25a0 elected leader d4d2bfcafeea25a0 at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.029412Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4d2bfcafeea25a0","local-member-attributes":"{Name:pause-646953 ClientURLs:[https://192.168.61.156:2379]}","request-path":"/0/members/d4d2bfcafeea25a0/attributes","cluster-id":"d4c5be1f342e7586","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:44:51.029428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:44:51.029594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:44:51.029633Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:44:51.029509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:44:51.031402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.156:2379"}
	{"level":"info","ts":"2024-04-17T19:44:51.031608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T19:45:13.398175Z","caller":"traceutil/trace.go:171","msg":"trace[1491267071] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"258.183909ms","start":"2024-04-17T19:45:13.13993Z","end":"2024-04-17T19:45:13.398114Z","steps":["trace[1491267071] 'process raft request'  (duration: 257.739749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:45:14.212553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.556313ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2711324126771220590 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" mod_revision:505 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" value_size:6315 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:45:14.212905Z","caller":"traceutil/trace.go:171","msg":"trace[1857233745] linearizableReadLoop","detail":"{readStateIndex:545; appliedIndex:544; }","duration":"412.31772ms","start":"2024-04-17T19:45:13.800571Z","end":"2024-04-17T19:45:14.212888Z","steps":["trace[1857233745] 'read index received'  (duration: 22.407338ms)","trace[1857233745] 'applied index is now lower than readState.Index'  (duration: 389.908685ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:45:14.213029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"412.451597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-646953\" ","response":"range_response_count:1 size:7010"}
	{"level":"info","ts":"2024-04-17T19:45:14.21308Z","caller":"traceutil/trace.go:171","msg":"trace[990864836] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-646953; range_end:; response_count:1; response_revision:506; }","duration":"412.536305ms","start":"2024-04-17T19:45:13.800535Z","end":"2024-04-17T19:45:14.213071Z","steps":["trace[990864836] 'agreement among raft nodes before linearized reading'  (duration: 412.443317ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:45:14.21315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:45:13.800518Z","time spent":"412.614429ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7034,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-646953\" "}
	{"level":"info","ts":"2024-04-17T19:45:14.213529Z","caller":"traceutil/trace.go:171","msg":"trace[911604733] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"791.828314ms","start":"2024-04-17T19:45:13.421686Z","end":"2024-04-17T19:45:14.213514Z","steps":["trace[911604733] 'process raft request'  (duration: 401.259604ms)","trace[911604733] 'compare'  (duration: 388.166269ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:45:14.213672Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:45:13.421668Z","time spent":"791.96155ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6386,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" mod_revision:505 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" value_size:6315 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" > >"}
	{"level":"info","ts":"2024-04-17T19:45:14.615722Z","caller":"traceutil/trace.go:171","msg":"trace[1794128587] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"159.693693ms","start":"2024-04-17T19:45:14.456009Z","end":"2024-04-17T19:45:14.615703Z","steps":["trace[1794128587] 'process raft request'  (duration: 157.736335ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:45:15.635178Z","caller":"traceutil/trace.go:171","msg":"trace[95993324] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"159.093782ms","start":"2024-04-17T19:45:15.476059Z","end":"2024-04-17T19:45:15.635152Z","steps":["trace[95993324] 'process raft request'  (duration: 65.096496ms)","trace[95993324] 'compare'  (duration: 93.750914ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:45:23 up 2 min,  0 users,  load average: 1.42, 0.50, 0.18
	Linux pause-646953 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a] <==
	I0417 19:45:04.178932       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:45:04.179633       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:45:04.179698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:45:04.181289       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:45:04.181375       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:45:04.181423       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:45:04.181450       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:45:04.181472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:45:04.181493       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:45:04.187339       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:45:04.189385       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:45:04.888633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0417 19:45:05.434185       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.156]
	I0417 19:45:05.441209       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:45:05.467156       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0417 19:45:05.611103       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 19:45:05.635062       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 19:45:05.700191       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 19:45:05.748770       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:45:05.757540       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 19:45:14.215542       1 trace.go:236] Trace[1507521914]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e39bf3d5-e1bd-482c-bf93-edf722f694d9,client:192.168.61.156,api-group:,api-version:v1,name:kube-controller-manager-pause-646953,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-646953/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/22140b6,verb:PATCH (17-Apr-2024 19:45:13.412) (total time: 803ms):
	Trace[1507521914]: ["GuaranteedUpdate etcd3" audit-id:e39bf3d5-e1bd-482c-bf93-edf722f694d9,key:/pods/kube-system/kube-controller-manager-pause-646953,type:*core.Pod,resource:pods 803ms (19:45:13.412)
	Trace[1507521914]:  ---"Txn call completed" 795ms (19:45:14.214)]
	Trace[1507521914]: ---"Object stored in database" 796ms (19:45:14.214)
	Trace[1507521914]: [803.255906ms] [803.255906ms] END
	
	
	==> kube-apiserver [79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9] <==
	I0417 19:44:56.294874       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0417 19:44:56.295446       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0417 19:44:56.295597       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0417 19:44:56.295608       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0417 19:44:56.295628       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0417 19:44:56.295694       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0417 19:44:56.295778       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:44:56.296944       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:44:56.297319       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:44:56.299886       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0417 19:44:56.299960       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0417 19:44:56.299996       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0417 19:44:56.303371       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:44:56.312893       1 controller.go:157] Shutting down quota evaluator
	I0417 19:44:56.312927       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313035       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313043       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313049       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313053       1 controller.go:176] quota evaluator worker shutdown
	E0417 19:44:57.021605       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:57.022829       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0417 19:44:58.021309       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:58.022656       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0417 19:44:59.021995       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:59.023479       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-controller-manager [618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4] <==
	
	
	==> kube-controller-manager [f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd] <==
	I0417 19:45:17.185305       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0417 19:45:17.207602       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0417 19:45:17.209933       1 shared_informer.go:320] Caches are synced for HPA
	I0417 19:45:17.211118       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0417 19:45:17.212218       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0417 19:45:17.213589       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0417 19:45:17.213695       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0417 19:45:17.213697       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0417 19:45:17.213715       1 shared_informer.go:320] Caches are synced for daemon sets
	I0417 19:45:17.222121       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0417 19:45:17.298851       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0417 19:45:17.299055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="129.394µs"
	I0417 19:45:17.330377       1 shared_informer.go:320] Caches are synced for disruption
	I0417 19:45:17.342801       1 shared_informer.go:320] Caches are synced for deployment
	I0417 19:45:17.352763       1 shared_informer.go:320] Caches are synced for attach detach
	I0417 19:45:17.357919       1 shared_informer.go:320] Caches are synced for persistent volume
	I0417 19:45:17.370592       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:45:17.382087       1 shared_informer.go:320] Caches are synced for expand
	I0417 19:45:17.399892       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:45:17.410430       1 shared_informer.go:320] Caches are synced for ephemeral
	I0417 19:45:17.433048       1 shared_informer.go:320] Caches are synced for stateful set
	I0417 19:45:17.439356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0417 19:45:17.830792       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:45:17.893144       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:45:17.893206       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74] <==
	I0417 19:44:39.160859       1 server_linux.go:69] "Using iptables proxy"
	E0417 19:44:49.177688       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-646953\": net/http: TLS handshake timeout"
	I0417 19:44:56.213345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.156"]
	I0417 19:44:56.271074       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:44:56.271184       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:44:56.271222       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:44:56.276816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:44:56.277107       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:44:56.277301       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:44:56.281590       1 config.go:192] "Starting service config controller"
	I0417 19:44:56.281730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:44:56.282213       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:44:56.282371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:44:56.287891       1 config.go:319] "Starting node config controller"
	I0417 19:44:56.287925       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:44:56.382735       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:44:56.382879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:44:56.388036       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5] <==
	I0417 19:45:05.228576       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:45:05.259971       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.156"]
	I0417 19:45:05.316945       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:45:05.317080       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:45:05.317124       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:45:05.323432       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:45:05.323918       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:45:05.324000       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:45:05.325165       1 config.go:192] "Starting service config controller"
	I0417 19:45:05.325211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:45:05.325369       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:45:05.325476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:45:05.325378       1 config.go:319] "Starting node config controller"
	I0417 19:45:05.326049       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:45:05.427818       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:45:05.428066       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:45:05.428177       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c] <==
	I0417 19:45:04.056095       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:45:04.058470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0417 19:45:04.088737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.088845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.088973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0417 19:45:04.089014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0417 19:45:04.089091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:45:04.089122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:45:04.089214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.089337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.089424       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:45:04.089456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:45:04.089562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 19:45:04.089618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 19:45:04.091405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0417 19:45:04.091484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0417 19:45:04.091713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 19:45:04.091769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 19:45:04.091872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.092500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.092046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:45:04.092606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:45:04.092101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:45:04.092661       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0417 19:45:04.159359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7] <==
	I0417 19:44:39.528782       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945107    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/148dc06994f8c7b5b9f6353a4eca512b-etcd-data\") pod \"etcd-pause-646953\" (UID: \"148dc06994f8c7b5b9f6353a4eca512b\") " pod="kube-system/etcd-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945122    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-flexvolume-dir\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945139    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-k8s-certs\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945156    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945171    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77fe5231f0f485a3b159d32d2c47ceda-kubeconfig\") pod \"kube-scheduler-pause-646953\" (UID: \"77fe5231f0f485a3b159d32d2c47ceda\") " pod="kube-system/kube-scheduler-pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.045649    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.046649    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.156:8443: connect: connection refused" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.220607    3430 scope.go:117] "RemoveContainer" containerID="e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.225791    3430 scope.go:117] "RemoveContainer" containerID="79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.227334    3430 scope.go:117] "RemoveContainer" containerID="618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.344446    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-646953?timeout=10s\": dial tcp 192.168.61.156:8443: connect: connection refused" interval="800ms"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.448764    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.450440    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.156:8443: connect: connection refused" node="pause-646953"
	Apr 17 19:45:02 pause-646953 kubelet[3430]: I0417 19:45:02.252016    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.196472    3430 kubelet_node_status.go:112] "Node was previously registered" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.197008    3430 kubelet_node_status.go:76] "Successfully registered node" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.200618    3430 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.203705    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.713542    3430 apiserver.go:52] "Watching apiserver"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.716874    3430 topology_manager.go:215] "Topology Admit Handler" podUID="3eec3404-cdc2-4158-88b9-df4e4203f290" podNamespace="kube-system" podName="kube-proxy-w9mzs"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.717183    3430 topology_manager.go:215] "Topology Admit Handler" podUID="de016395-52a7-4569-945d-dc21c63529c1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qrd57"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.737998    3430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.759037    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eec3404-cdc2-4158-88b9-df4e4203f290-lib-modules\") pod \"kube-proxy-w9mzs\" (UID: \"3eec3404-cdc2-4158-88b9-df4e4203f290\") " pod="kube-system/kube-proxy-w9mzs"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.759306    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eec3404-cdc2-4158-88b9-df4e4203f290-xtables-lock\") pod \"kube-proxy-w9mzs\" (UID: \"3eec3404-cdc2-4158-88b9-df4e4203f290\") " pod="kube-system/kube-proxy-w9mzs"
	Apr 17 19:45:05 pause-646953 kubelet[3430]: I0417 19:45:05.020069    3430 scope.go:117] "RemoveContainer" containerID="1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-646953 -n pause-646953
helpers_test.go:261: (dbg) Run:  kubectl --context pause-646953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
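For reference, each post-mortem block in this report is assembled from plain minikube/kubectl invocations (the same helpers_test.go commands shown just above and below). A minimal sketch of re-running that collection by hand against this run's pause-646953 profile, assuming that profile and the out/minikube-linux-amd64 binary are still present:

# host / apiserver state of the failing profile
out/minikube-linux-amd64 status --format='{{.Host}}' -p pause-646953 -n pause-646953
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p pause-646953 -n pause-646953
# last 25 log entries from the node (produces the "-- stdout --" dump that follows)
out/minikube-linux-amd64 -p pause-646953 logs -n 25
# any pods that are not Running
kubectl --context pause-646953 get po -A --field-selector=status.phase!=Running \
  -o=jsonpath='{.items[*].metadata.name}'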
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-646953 -n pause-646953
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-646953 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-646953 logs -n 25: (1.54652292s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl status kubelet --all                       |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat kubelet                                |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat docker                                 |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo docker                        | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | system info                                          |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat cri-docker                             |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | cri-dockerd --version                                |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | systemctl status containerd                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat containerd                             |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo cat                           | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | containerd config dump                               |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl status crio --all                          |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo                               | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo find                          | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |                |                     |                     |
	| ssh     | -p kindnet-450558 sudo crio                          | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	|         | config                                               |                       |         |                |                     |                     |
	| delete  | -p kindnet-450558                                    | kindnet-450558        | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC | 17 Apr 24 19:45 UTC |
	| start   | -p custom-flannel-450558                             | custom-flannel-450558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 19:45 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |                |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |                |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |                |                     |                     |
	|         | --driver=kvm2                                        |                       |         |                |                     |                     |
	|         | --container-runtime=crio                             |                       |         |                |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 19:45:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 19:45:13.746174  128079 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:45:13.746309  128079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:45:13.746324  128079 out.go:304] Setting ErrFile to fd 2...
	I0417 19:45:13.746332  128079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:45:13.746657  128079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:45:13.747516  128079 out.go:298] Setting JSON to false
	I0417 19:45:13.749405  128079 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12462,"bootTime":1713370652,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 19:45:13.749548  128079 start.go:139] virtualization: kvm guest
	I0417 19:45:13.835925  128079 out.go:177] * [custom-flannel-450558] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 19:45:13.980650  128079 notify.go:220] Checking for updates...
	I0417 19:45:14.095552  128079 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 19:45:14.110976  128079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 19:45:14.193914  128079 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:45:14.316834  128079 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 19:45:12.466433  126291 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.850561377s)
	I0417 19:45:12.466474  126291 crio.go:469] duration metric: took 2.850693904s to extract the tarball
	I0417 19:45:12.466486  126291 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0417 19:45:12.507156  126291 ssh_runner.go:195] Run: sudo crictl images --output json
	I0417 19:45:12.555373  126291 crio.go:514] all images are preloaded for cri-o runtime.
	I0417 19:45:12.555397  126291 cache_images.go:84] Images are preloaded, skipping loading
	I0417 19:45:12.555405  126291 kubeadm.go:928] updating node { 192.168.50.12 8443 v1.30.0-rc.2 crio true true} ...
	I0417 19:45:12.555505  126291 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-450558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:calico-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0417 19:45:12.555567  126291 ssh_runner.go:195] Run: crio config
	I0417 19:45:12.610249  126291 cni.go:84] Creating CNI manager for "calico"
	I0417 19:45:12.610281  126291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0417 19:45:12.610314  126291 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.12 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-450558 NodeName:calico-450558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0417 19:45:12.610494  126291 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-450558"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0417 19:45:12.610564  126291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0417 19:45:12.622689  126291 binaries.go:44] Found k8s binaries, skipping transfer
	I0417 19:45:12.622752  126291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0417 19:45:12.634211  126291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0417 19:45:12.655022  126291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0417 19:45:12.682353  126291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0417 19:45:12.705157  126291 ssh_runner.go:195] Run: grep 192.168.50.12	control-plane.minikube.internal$ /etc/hosts
	I0417 19:45:12.711107  126291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0417 19:45:12.729060  126291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:45:12.854230  126291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:45:12.875732  126291 certs.go:68] Setting up /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558 for IP: 192.168.50.12
	I0417 19:45:12.875759  126291 certs.go:194] generating shared ca certs ...
	I0417 19:45:12.875779  126291 certs.go:226] acquiring lock for ca certs: {Name:mk03352eb76143f462405cfdf4da402444190b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.875953  126291 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key
	I0417 19:45:12.876020  126291 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key
	I0417 19:45:12.876038  126291 certs.go:256] generating profile certs ...
	I0417 19:45:12.876115  126291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key
	I0417 19:45:12.876146  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt with IP's: []
	I0417 19:45:12.990369  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt ...
	I0417 19:45:12.990400  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt: {Name:mk005ac5a5a37a9f80cf82c5b80f4d0942d05f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.990591  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key ...
	I0417 19:45:12.990613  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.key: {Name:mk51a2b07c04251dec4aa36ceacb335ebab56fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:12.990723  126291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401
	I0417 19:45:12.990746  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12]
	I0417 19:45:13.136716  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 ...
	I0417 19:45:13.136747  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401: {Name:mkd852c1af45888fb6d57d5ac625d75ed407b00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.136940  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401 ...
	I0417 19:45:13.136959  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401: {Name:mk15f10bd3142caa117463d65ea09321dea4df45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.137081  126291 certs.go:381] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt.3344d401 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt
	I0417 19:45:13.137174  126291 certs.go:385] copying /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key.3344d401 -> /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key
	I0417 19:45:13.137229  126291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key
	I0417 19:45:13.137245  126291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt with IP's: []
	I0417 19:45:13.281098  126291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt ...
	I0417 19:45:13.281128  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt: {Name:mk16e0a0a8c483281e183044f6a8d5dcaa8f6454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.281303  126291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key ...
	I0417 19:45:13.281318  126291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key: {Name:mk0cc19dc89a110f06e530290c8f3a2805d07250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:13.281520  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem (1338 bytes)
	W0417 19:45:13.281557  126291 certs.go:480] ignoring /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207_empty.pem, impossibly tiny 0 bytes
	I0417 19:45:13.281567  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca-key.pem (1679 bytes)
	I0417 19:45:13.281594  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/ca.pem (1078 bytes)
	I0417 19:45:13.281615  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/cert.pem (1123 bytes)
	I0417 19:45:13.281643  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/certs/key.pem (1679 bytes)
	I0417 19:45:13.281686  126291 certs.go:484] found cert: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem (1708 bytes)
	I0417 19:45:13.282252  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0417 19:45:13.311871  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0417 19:45:13.340877  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0417 19:45:13.367893  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0417 19:45:13.395863  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0417 19:45:13.425720  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0417 19:45:13.455409  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0417 19:45:13.483577  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0417 19:45:13.512857  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0417 19:45:13.540302  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/certs/83207.pem --> /usr/share/ca-certificates/83207.pem (1338 bytes)
	I0417 19:45:13.580463  126291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/ssl/certs/832072.pem --> /usr/share/ca-certificates/832072.pem (1708 bytes)
	I0417 19:45:13.605236  126291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0417 19:45:13.626452  126291 ssh_runner.go:195] Run: openssl version
	I0417 19:45:13.632908  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0417 19:45:13.645961  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.651153  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 17 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.651219  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0417 19:45:13.657372  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0417 19:45:13.669628  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83207.pem && ln -fs /usr/share/ca-certificates/83207.pem /etc/ssl/certs/83207.pem"
	I0417 19:45:13.684251  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.689719  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 17 18:40 /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.689782  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83207.pem
	I0417 19:45:13.696075  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83207.pem /etc/ssl/certs/51391683.0"
	I0417 19:45:13.709195  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832072.pem && ln -fs /usr/share/ca-certificates/832072.pem /etc/ssl/certs/832072.pem"
	I0417 19:45:13.722790  126291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.728473  126291 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 17 18:40 /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.728610  126291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832072.pem
	I0417 19:45:13.735471  126291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/832072.pem /etc/ssl/certs/3ec20f2e.0"
	I0417 19:45:13.750244  126291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0417 19:45:13.754878  126291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0417 19:45:13.754929  126291 kubeadm.go:391] StartCluster: {Name:calico-450558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-r
c.2 ClusterName:calico-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:45:13.755020  126291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0417 19:45:13.755083  126291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0417 19:45:13.803045  126291 cri.go:89] found id: ""
	I0417 19:45:13.803125  126291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0417 19:45:13.814595  126291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0417 19:45:13.825273  126291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0417 19:45:13.837863  126291 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0417 19:45:13.837885  126291 kubeadm.go:156] found existing configuration files:
	
	I0417 19:45:13.837940  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0417 19:45:13.848164  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0417 19:45:13.848220  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0417 19:45:13.859096  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0417 19:45:13.869961  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0417 19:45:13.870037  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0417 19:45:13.880632  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0417 19:45:13.890644  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0417 19:45:13.890735  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0417 19:45:13.901936  126291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0417 19:45:13.912389  126291 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0417 19:45:13.912461  126291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0417 19:45:13.923838  126291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0417 19:45:13.984201  126291 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0417 19:45:13.984338  126291 kubeadm.go:309] [preflight] Running pre-flight checks
	I0417 19:45:14.119332  126291 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0417 19:45:14.119467  126291 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0417 19:45:14.119614  126291 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0417 19:45:14.376528  126291 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0417 19:45:14.490783  128079 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 19:45:14.607675  128079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 19:45:14.628432  128079 config.go:182] Loaded profile config "calico-450558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.628622  128079 config.go:182] Loaded profile config "kubernetes-upgrade-365550": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.628858  128079 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:14.629004  128079 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 19:45:14.667214  128079 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 19:45:14.668540  128079 start.go:297] selected driver: kvm2
	I0417 19:45:14.668557  128079 start.go:901] validating driver "kvm2" against <nil>
	I0417 19:45:14.668574  128079 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 19:45:14.669388  128079 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:45:14.669486  128079 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 19:45:14.686852  128079 install.go:137] /home/jenkins/minikube-integration/18665-75973/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 19:45:14.686910  128079 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 19:45:14.687231  128079 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:45:14.687315  128079 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0417 19:45:14.687342  128079 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0417 19:45:14.687416  128079 start.go:340] cluster config:
	{Name:custom-flannel-450558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:custom-flannel-450558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 19:45:14.687533  128079 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 19:45:14.689563  128079 out.go:177] * Starting "custom-flannel-450558" primary control-plane node in "custom-flannel-450558" cluster
	I0417 19:45:10.635142  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:10.635725  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:10.635758  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:10.635668  127260 retry.go:31] will retry after 1.045237031s: waiting for machine to come up
	I0417 19:45:11.682317  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:11.682822  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:11.682855  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:11.682808  127260 retry.go:31] will retry after 1.273142266s: waiting for machine to come up
	I0417 19:45:13.087127  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:13.087655  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:13.087688  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:13.087597  127260 retry.go:31] will retry after 1.137015199s: waiting for machine to come up
	I0417 19:45:14.226274  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:14.227003  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:14.227037  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:14.226955  127260 retry.go:31] will retry after 1.508439303s: waiting for machine to come up
	I0417 19:45:14.490755  126291 out.go:204]   - Generating certificates and keys ...
	I0417 19:45:14.490908  126291 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0417 19:45:14.491022  126291 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0417 19:45:14.514454  126291 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0417 19:45:14.641385  126291 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0417 19:45:14.785918  126291 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0417 19:45:14.941821  126291 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0417 19:45:15.240188  126291 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0417 19:45:15.240387  126291 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-450558 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I0417 19:45:15.351493  126291 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0417 19:45:15.351852  126291 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-450558 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I0417 19:45:15.410048  126291 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0417 19:45:15.577060  126291 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0417 19:45:15.758418  126291 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0417 19:45:15.758702  126291 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0417 19:45:15.946009  126291 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0417 19:45:16.105543  126291 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0417 19:45:16.218848  126291 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0417 19:45:16.368011  126291 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0417 19:45:16.529800  126291 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0417 19:45:16.530769  126291 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0417 19:45:16.534968  126291 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0417 19:45:14.233703  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:16.319978  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:14.691067  128079 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 19:45:14.691120  128079 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 19:45:14.691140  128079 cache.go:56] Caching tarball of preloaded images
	I0417 19:45:14.691272  128079 preload.go:173] Found /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0417 19:45:14.691285  128079 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 19:45:14.691419  128079 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/config.json ...
	I0417 19:45:14.691442  128079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/config.json: {Name:mk89d01c6f9ae3e421c30740631cfd3eeff8b840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:14.691605  128079 start.go:360] acquireMachinesLock for custom-flannel-450558: {Name:mk477aec4c276e4d91f705836ed4842912dfe2c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0417 19:45:18.320650  124898 pod_ready.go:102] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"False"
	I0417 19:45:18.820697  124898 pod_ready.go:92] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.820737  124898 pod_ready.go:81] duration metric: took 11.508740323s for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.820750  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.826870  124898 pod_ready.go:92] pod "kube-controller-manager-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.826896  124898 pod_ready.go:81] duration metric: took 6.136523ms for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.826910  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.833641  124898 pod_ready.go:92] pod "kube-proxy-w9mzs" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.833731  124898 pod_ready.go:81] duration metric: took 6.803774ms for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.833759  124898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.841148  124898 pod_ready.go:92] pod "kube-scheduler-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:18.841174  124898 pod_ready.go:81] duration metric: took 7.383796ms for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:18.841184  124898 pod_ready.go:38] duration metric: took 13.051412994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:18.841229  124898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0417 19:45:18.855307  124898 ops.go:34] apiserver oom_adj: -16
	I0417 19:45:18.855337  124898 kubeadm.go:591] duration metric: took 40.282773589s to restartPrimaryControlPlane
	I0417 19:45:18.855349  124898 kubeadm.go:393] duration metric: took 40.602524348s to StartCluster
	I0417 19:45:18.855371  124898 settings.go:142] acquiring lock: {Name:mk5d952127253ee5e60e06b072b3460ff4f86e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:18.855458  124898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 19:45:18.856374  124898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/kubeconfig: {Name:mkca968a5a2538f9f961af0f359c1b9923864131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 19:45:18.856679  124898 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.156 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0417 19:45:18.858358  124898 out.go:177] * Verifying Kubernetes components...
	I0417 19:45:18.856809  124898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0417 19:45:18.856991  124898 config.go:182] Loaded profile config "pause-646953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:45:18.862416  124898 out.go:177] * Enabled addons: 
	I0417 19:45:15.737500  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:15.738127  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:15.738156  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:15.738075  127260 retry.go:31] will retry after 2.611198893s: waiting for machine to come up
	I0417 19:45:18.352177  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | domain kubernetes-upgrade-365550 has defined MAC address 52:54:00:6b:0b:f3 in network mk-kubernetes-upgrade-365550
	I0417 19:45:18.352793  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | unable to find current IP address of domain kubernetes-upgrade-365550 in network mk-kubernetes-upgrade-365550
	I0417 19:45:18.352828  127107 main.go:141] libmachine: (kubernetes-upgrade-365550) DBG | I0417 19:45:18.352747  127260 retry.go:31] will retry after 3.3267745s: waiting for machine to come up
	I0417 19:45:16.536958  126291 out.go:204]   - Booting up control plane ...
	I0417 19:45:16.537075  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0417 19:45:16.537188  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0417 19:45:16.537309  126291 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0417 19:45:16.553139  126291 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0417 19:45:16.554065  126291 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0417 19:45:16.554157  126291 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0417 19:45:16.696003  126291 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0417 19:45:16.696130  126291 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0417 19:45:17.197794  126291 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.24812ms
	I0417 19:45:17.197909  126291 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0417 19:45:18.860119  124898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0417 19:45:18.863921  124898 addons.go:505] duration metric: took 7.157694ms for enable addons: enabled=[]
	I0417 19:45:19.094721  124898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0417 19:45:19.114864  124898 node_ready.go:35] waiting up to 6m0s for node "pause-646953" to be "Ready" ...
	I0417 19:45:19.119485  124898 node_ready.go:49] node "pause-646953" has status "Ready":"True"
	I0417 19:45:19.119512  124898 node_ready.go:38] duration metric: took 4.601149ms for node "pause-646953" to be "Ready" ...
	I0417 19:45:19.119524  124898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:19.126630  124898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.217045  124898 pod_ready.go:92] pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:19.217079  124898 pod_ready.go:81] duration metric: took 90.415368ms for pod "coredns-7db6d8ff4d-qrd57" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.217091  124898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.617768  124898 pod_ready.go:92] pod "etcd-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:19.617798  124898 pod_ready.go:81] duration metric: took 400.698195ms for pod "etcd-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:19.617810  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.016609  124898 pod_ready.go:92] pod "kube-apiserver-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.016633  124898 pod_ready.go:81] duration metric: took 398.81534ms for pod "kube-apiserver-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.016644  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.417144  124898 pod_ready.go:92] pod "kube-controller-manager-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.417170  124898 pod_ready.go:81] duration metric: took 400.51881ms for pod "kube-controller-manager-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.417185  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.817291  124898 pod_ready.go:92] pod "kube-proxy-w9mzs" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:20.817317  124898 pod_ready.go:81] duration metric: took 400.125061ms for pod "kube-proxy-w9mzs" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:20.817327  124898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:21.216898  124898 pod_ready.go:92] pod "kube-scheduler-pause-646953" in "kube-system" namespace has status "Ready":"True"
	I0417 19:45:21.216925  124898 pod_ready.go:81] duration metric: took 399.591159ms for pod "kube-scheduler-pause-646953" in "kube-system" namespace to be "Ready" ...
	I0417 19:45:21.216936  124898 pod_ready.go:38] duration metric: took 2.097399721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0417 19:45:21.216962  124898 api_server.go:52] waiting for apiserver process to appear ...
	I0417 19:45:21.217033  124898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:45:21.231787  124898 api_server.go:72] duration metric: took 2.375066625s to wait for apiserver process to appear ...
	I0417 19:45:21.231818  124898 api_server.go:88] waiting for apiserver healthz status ...
	I0417 19:45:21.231841  124898 api_server.go:253] Checking apiserver healthz at https://192.168.61.156:8443/healthz ...
	I0417 19:45:21.237395  124898 api_server.go:279] https://192.168.61.156:8443/healthz returned 200:
	ok
	I0417 19:45:21.238727  124898 api_server.go:141] control plane version: v1.30.0-rc.2
	I0417 19:45:21.238750  124898 api_server.go:131] duration metric: took 6.925029ms to wait for apiserver health ...
	I0417 19:45:21.238765  124898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0417 19:45:21.419064  124898 system_pods.go:59] 6 kube-system pods found
	I0417 19:45:21.419091  124898 system_pods.go:61] "coredns-7db6d8ff4d-qrd57" [de016395-52a7-4569-945d-dc21c63529c1] Running
	I0417 19:45:21.419095  124898 system_pods.go:61] "etcd-pause-646953" [7678dd72-ab0d-4fe8-b612-89a207c7fd46] Running
	I0417 19:45:21.419099  124898 system_pods.go:61] "kube-apiserver-pause-646953" [df5e6bfd-761e-4f49-a8ad-7d988a44f51d] Running
	I0417 19:45:21.419104  124898 system_pods.go:61] "kube-controller-manager-pause-646953" [07582c8b-da03-4a06-a628-0d023365d3c6] Running
	I0417 19:45:21.419108  124898 system_pods.go:61] "kube-proxy-w9mzs" [3eec3404-cdc2-4158-88b9-df4e4203f290] Running
	I0417 19:45:21.419111  124898 system_pods.go:61] "kube-scheduler-pause-646953" [de17acd1-919a-479d-ac56-bde1c4f78406] Running
	I0417 19:45:21.419117  124898 system_pods.go:74] duration metric: took 180.346079ms to wait for pod list to return data ...
	I0417 19:45:21.419123  124898 default_sa.go:34] waiting for default service account to be created ...
	I0417 19:45:21.617499  124898 default_sa.go:45] found service account: "default"
	I0417 19:45:21.617535  124898 default_sa.go:55] duration metric: took 198.404037ms for default service account to be created ...
	I0417 19:45:21.617554  124898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0417 19:45:21.819662  124898 system_pods.go:86] 6 kube-system pods found
	I0417 19:45:21.819691  124898 system_pods.go:89] "coredns-7db6d8ff4d-qrd57" [de016395-52a7-4569-945d-dc21c63529c1] Running
	I0417 19:45:21.819697  124898 system_pods.go:89] "etcd-pause-646953" [7678dd72-ab0d-4fe8-b612-89a207c7fd46] Running
	I0417 19:45:21.819701  124898 system_pods.go:89] "kube-apiserver-pause-646953" [df5e6bfd-761e-4f49-a8ad-7d988a44f51d] Running
	I0417 19:45:21.819705  124898 system_pods.go:89] "kube-controller-manager-pause-646953" [07582c8b-da03-4a06-a628-0d023365d3c6] Running
	I0417 19:45:21.819709  124898 system_pods.go:89] "kube-proxy-w9mzs" [3eec3404-cdc2-4158-88b9-df4e4203f290] Running
	I0417 19:45:21.819712  124898 system_pods.go:89] "kube-scheduler-pause-646953" [de17acd1-919a-479d-ac56-bde1c4f78406] Running
	I0417 19:45:21.819721  124898 system_pods.go:126] duration metric: took 202.159457ms to wait for k8s-apps to be running ...
	I0417 19:45:21.819730  124898 system_svc.go:44] waiting for kubelet service to be running ....
	I0417 19:45:21.819783  124898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:45:21.838369  124898 system_svc.go:56] duration metric: took 18.624253ms WaitForService to wait for kubelet
	I0417 19:45:21.838404  124898 kubeadm.go:576] duration metric: took 2.981691937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0417 19:45:21.838442  124898 node_conditions.go:102] verifying NodePressure condition ...
	I0417 19:45:22.016815  124898 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0417 19:45:22.016847  124898 node_conditions.go:123] node cpu capacity is 2
	I0417 19:45:22.016859  124898 node_conditions.go:105] duration metric: took 178.407477ms to run NodePressure ...
	I0417 19:45:22.016877  124898 start.go:240] waiting for startup goroutines ...
	I0417 19:45:22.016887  124898 start.go:245] waiting for cluster config update ...
	I0417 19:45:22.016896  124898 start.go:254] writing updated cluster config ...
	I0417 19:45:22.017225  124898 ssh_runner.go:195] Run: rm -f paused
	I0417 19:45:22.072386  124898 start.go:600] kubectl: 1.29.4, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0417 19:45:22.074010  124898 out.go:177] * Done! kubectl is now configured to use "pause-646953" cluster and "default" namespace by default
	I0417 19:45:22.696455  126291 kubeadm.go:309] [api-check] The API server is healthy after 5.501728032s
	I0417 19:45:22.717794  126291 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0417 19:45:22.735178  126291 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0417 19:45:22.766289  126291 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0417 19:45:22.766545  126291 kubeadm.go:309] [mark-control-plane] Marking the node calico-450558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0417 19:45:22.779359  126291 kubeadm.go:309] [bootstrap-token] Using token: yddcl9.7p1apq4bqaadwh0b
	I0417 19:45:22.780991  126291 out.go:204]   - Configuring RBAC rules ...
	I0417 19:45:22.781152  126291 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0417 19:45:22.787324  126291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0417 19:45:22.807655  126291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0417 19:45:22.814102  126291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0417 19:45:22.819796  126291 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0417 19:45:22.829800  126291 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0417 19:45:23.107446  126291 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0417 19:45:23.546512  126291 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0417 19:45:24.111576  126291 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0417 19:45:24.111602  126291 kubeadm.go:309] 
	I0417 19:45:24.111684  126291 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0417 19:45:24.111690  126291 kubeadm.go:309] 
	I0417 19:45:24.111796  126291 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0417 19:45:24.111803  126291 kubeadm.go:309] 
	I0417 19:45:24.111856  126291 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0417 19:45:24.111937  126291 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0417 19:45:24.112006  126291 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0417 19:45:24.112011  126291 kubeadm.go:309] 
	I0417 19:45:24.112081  126291 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0417 19:45:24.112087  126291 kubeadm.go:309] 
	I0417 19:45:24.112142  126291 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0417 19:45:24.112148  126291 kubeadm.go:309] 
	I0417 19:45:24.112209  126291 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0417 19:45:24.112291  126291 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0417 19:45:24.112384  126291 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0417 19:45:24.112408  126291 kubeadm.go:309] 
	I0417 19:45:24.112511  126291 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0417 19:45:24.112605  126291 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0417 19:45:24.112610  126291 kubeadm.go:309] 
	I0417 19:45:24.112704  126291 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yddcl9.7p1apq4bqaadwh0b \
	I0417 19:45:24.112839  126291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 \
	I0417 19:45:24.112867  126291 kubeadm.go:309] 	--control-plane 
	I0417 19:45:24.112915  126291 kubeadm.go:309] 
	I0417 19:45:24.113060  126291 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0417 19:45:24.113099  126291 kubeadm.go:309] 
	I0417 19:45:24.113201  126291 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yddcl9.7p1apq4bqaadwh0b \
	I0417 19:45:24.113351  126291 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:48f645e13fdefadc8e826582d6403b190039751bc152ed40dae3f9a02a350767 
	I0417 19:45:24.113759  126291 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0417 19:45:24.113832  126291 cni.go:84] Creating CNI manager for "calico"
	I0417 19:45:24.115469  126291 out.go:177] * Configuring Calico (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.938364132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5cf0abb-96f8-4c8a-80cf-80d3fc8d20ce name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.940720268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5c10532-f00a-4097-9a2b-f3d03a66e457 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.941125607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383124941097265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5c10532-f00a-4097-9a2b-f3d03a66e457 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.945541748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fa58d88-98c3-4ef6-add5-1a7c6bf8293a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.945597680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fa58d88-98c3-4ef6-add5-1a7c6bf8293a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:24 pause-646953 crio[2440]: time="2024-04-17 19:45:24.945829704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fa58d88-98c3-4ef6-add5-1a7c6bf8293a name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.016767713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc81b3f2-8ead-48a1-a6a7-12e781c36e94 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.016870492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc81b3f2-8ead-48a1-a6a7-12e781c36e94 name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.020214262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3cbf8d1-f8d6-413e-a2ac-aab155119625 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.020742768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383125020717077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3cbf8d1-f8d6-413e-a2ac-aab155119625 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.022017713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17c42a7c-1e38-4721-b644-ef0dd679ce79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.022099509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17c42a7c-1e38-4721-b644-ef0dd679ce79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.022397904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17c42a7c-1e38-4721-b644-ef0dd679ce79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.049921137Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7647c1c1-8bb4-4ff9-bc7d-40727e23494d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.050158153Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrd57,Uid:de016395-52a7-4569-945d-dc21c63529c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077815338280,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:43:40.088306538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-646953,Uid:77fe5231f0f485a3b159d32d2c47ceda,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077651900596,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77fe5231f0f485a3b159d32d2c47ceda,kubernetes.io/config.seen: 2024-04-17T19:43:26.086740844Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-646953,Uid:e36f35028cccbcd2d15942038a695af3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077613756298,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccb
cd2d15942038a695af3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e36f35028cccbcd2d15942038a695af3,kubernetes.io/config.seen: 2024-04-17T19:43:26.086739822Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&PodSandboxMetadata{Name:etcd-pause-646953,Uid:148dc06994f8c7b5b9f6353a4eca512b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077599408286,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.156:2379,kubernetes.io/config.hash: 148dc06994f8c7b5b9f6353a4eca512b,kubernetes.io/config.seen: 2024-04-17T19:43:26.086741846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-646953,Uid:93b87bf1dac16be7cd1f2193d3029fa7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713383077569478883,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.156:8443,kubernetes.io/config.hash: 93b87bf1dac16be7cd1f2193d3029fa7,kubernetes.io/config.seen: 2024-04-17T19:43:26.086736175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&PodSandboxMetadata{Name:kube-proxy-w9mzs,Uid:3eec3404-cdc2-4158-88b9-df4e4203f290,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1713383077541795559,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:43:39.934636510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrd57,Uid:de016395-52a7-4569-945d-dc21c63529c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713383020426175982,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-04-17T19:43:40.088306538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:859f68351d99ebb0227bc5d6d3f74084ded58b5dc000662e5450f792809dc9a1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g44v,Uid:db2225b0-33a7-4b90-903f-b8ca9cc1ecaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713383020359999211,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4g44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db2225b0-33a7-4b90-903f-b8ca9cc1ecaf,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-17T19:43:40.039698074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7647c1c1-8bb4-4ff9-bc7d-40727e23494d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.050728141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fb4fb77-ce3e-476e-b037-64db435d8ee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.050821027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fb4fb77-ce3e-476e-b037-64db435d8ee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.051095583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fb4fb77-ce3e-476e-b037-64db435d8ee0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.087373762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=028caae7-076d-4b9f-84ab-f70057a795bb name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.087503183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=028caae7-076d-4b9f-84ab-f70057a795bb name=/runtime.v1.RuntimeService/Version
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.089439631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3e9b746-55e6-4820-9a3b-b63fbfc3ecd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.090176936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713383125090144973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3e9b746-55e6-4820-9a3b-b63fbfc3ecd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.091420890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b717db26-d568-4fc5-8aea-f5259b8b9b0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.091516680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b717db26-d568-4fc5-8aea-f5259b8b9b0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 17 19:45:25 pause-646953 crio[2440]: time="2024-04-17 19:45:25.091847283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713383105053521571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713383101272180206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713383101253747867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713383101237105236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713383089286365129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752,PodSandboxId:2045deaa871b2fff5f36492c56be95bc73b153626f3b0420fedfc26dd2d23150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713383078847983567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4,PodSandboxId:c54196a452145ba077c351d319511e348bdb984ef840c3f94c47ddddf973eda6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713383078142186775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36f35028cccbcd2d15942038a695af3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea,PodSandboxId:f442bef04d830d7dc3c1043c85dd54ef00fa4481346615871200656d9cbacd3b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713383078030347652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 148dc06994f8c7b5b9f6353a4eca512b,},Annotations:map[string]string{io.kubernetes.container.hash: f0b9a83b,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74,PodSandboxId:2ca3182af06defee5c9ddebc7662e29923df79c1d5abfcd1854ed5a645e43929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713383077883099368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9mzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eec3404-cdc2-4158-88b9-df4e4203f290,},Annotations:map[string]string{io.kubernetes.container.hash: 38ade32f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7,PodSandboxId:1f09d2ea782526560be2088007fea72cf19ae5253cc2389c98d058cdddaa81e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713383077998655436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fe5231f0f485a3b159d32d2c47ceda,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9,PodSandboxId:1b7852dacf760726a99a426e63a0898c07f38a29b4a8ac61662c3200ef59834c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713383077950650619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-646953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b87bf1dac16be7cd1f2193d3029fa7,},Annotations:map[string]string{io.kubernetes.container.hash: c9e7f0c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4,PodSandboxId:c191b422a8899ac8a5e8e81e0beda7ff49a1c3b5dc5eff877ba0993179689932,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713383021451680290,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qrd57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de016395-52a7-4569-945d-dc21c63529c1,},Annotations:map[string]string{io.kubernetes.container.hash: 82b324fd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b717db26-d568-4fc5-8aea-f5259b8b9b0c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ed7515695bd57       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   20 seconds ago       Running             kube-proxy                2                   2ca3182af06de       kube-proxy-w9mzs
	f19a30628afd0       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   23 seconds ago       Running             kube-controller-manager   2                   c54196a452145       kube-controller-manager-pause-646953
	0b3e088056d8e       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   23 seconds ago       Running             kube-apiserver            2                   1b7852dacf760       kube-apiserver-pause-646953
	e0a1e6dd8fc61       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   23 seconds ago       Running             kube-scheduler            2                   1f09d2ea78252       kube-scheduler-pause-646953
	ca53906416edd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago       Running             etcd                      2                   f442bef04d830       etcd-pause-646953
	e12360ee1bcc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago       Running             coredns                   1                   2045deaa871b2       coredns-7db6d8ff4d-qrd57
	618429c1fd685       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   47 seconds ago       Exited              kube-controller-manager   1                   c54196a452145       kube-controller-manager-pause-646953
	76445264e28d7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   47 seconds ago       Exited              etcd                      1                   f442bef04d830       etcd-pause-646953
	e8370f2f797f7       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   47 seconds ago       Exited              kube-scheduler            1                   1f09d2ea78252       kube-scheduler-pause-646953
	79a0007ae410b       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   47 seconds ago       Exited              kube-apiserver            1                   1b7852dacf760       kube-apiserver-pause-646953
	1a9049c5a40d7       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   47 seconds ago       Exited              kube-proxy                1                   2ca3182af06de       kube-proxy-w9mzs
	0d41c03af85dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   c191b422a8899       coredns-7db6d8ff4d-qrd57
	
	
	==> coredns [0d41c03af85dcb2d74fab54057fceb1ba122942a3d0799d2f097cfab41d28cd4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[964633467]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.796) (total time: 29056ms):
	Trace[964633467]: ---"Objects listed" error:<nil> 29056ms (19:44:10.853)
	Trace[964633467]: [29.05698192s] [29.05698192s] END
	[INFO] plugin/kubernetes: Trace[1286549125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.796) (total time: 29058ms):
	Trace[1286549125]: ---"Objects listed" error:<nil> 29057ms (19:44:10.854)
	Trace[1286549125]: [29.058000718s] [29.058000718s] END
	[INFO] plugin/kubernetes: Trace[1957297417]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:43:41.799) (total time: 29055ms):
	Trace[1957297417]: ---"Objects listed" error:<nil> 29055ms (19:44:10.855)
	Trace[1957297417]: [29.055889272s] [29.055889272s] END
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43252 - 48434 "HINFO IN 8550402545300384940.1506769839142304122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020789501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e12360ee1bcc479db42dbead73e9fadc07f16864a00ed11fa92b20d6de744752] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44004 - 56259 "HINFO IN 7933819828100252723.1763484357391934585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050024402s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[284303994]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.194) (total time: 10002ms):
	Trace[284303994]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:44:49.196)
	Trace[284303994]: [10.002028079s] [10.002028079s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1080710972]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.196) (total time: 10001ms):
	Trace[1080710972]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:44:49.196)
	Trace[1080710972]: [10.001000918s] [10.001000918s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[270417557]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 19:44:39.195) (total time: 10001ms):
	Trace[270417557]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:44:49.196)
	Trace[270417557]: [10.001524932s] [10.001524932s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-646953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-646953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=pause-646953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T19_43_26_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 19:43:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-646953
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 19:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 19:45:04 +0000   Wed, 17 Apr 2024 19:43:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.156
	  Hostname:    pause-646953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 022aaa8a557641dbb2d3d59ba114cd51
	  System UUID:                022aaa8a-5576-41db-b2d3-d59ba114cd51
	  Boot ID:                    34a371e5-ecd5-445a-affb-c069b1debdc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-qrd57                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-646953                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-646953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-646953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-w9mzs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-646953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 29s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m6s)  kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m6s)  kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m6s)  kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeReady                118s                 kubelet          Node pause-646953 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node pause-646953 event: Registered Node pause-646953 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-646953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-646953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-646953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                   node-controller  Node pause-646953 event: Registered Node pause-646953 in Controller
	
	
	==> dmesg <==
	[  +0.058513] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062749] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.189171] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.150626] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.280603] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.558352] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.058650] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.597739] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.441843] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.149120] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.081453] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.375375] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.012974] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[ +11.559884] kauditd_printk_skb: 88 callbacks suppressed
	[Apr17 19:44] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.131022] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +0.168595] systemd-fstab-generator[2384]: Ignoring "noauto" option for root device
	[  +0.140326] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.284497] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +6.927176] systemd-fstab-generator[2553]: Ignoring "noauto" option for root device
	[  +0.091065] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.262432] kauditd_printk_skb: 87 callbacks suppressed
	[ +11.135827] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[Apr17 19:45] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.804925] systemd-fstab-generator[3777]: Ignoring "noauto" option for root device
	
	
	==> etcd [76445264e28d762463b3403f5cdf39c2513bfaf30bc2330c4884bedb5de7eaea] <==
	{"level":"info","ts":"2024-04-17T19:44:38.595648Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"70.11274ms"}
	{"level":"info","ts":"2024-04-17T19:44:38.626981Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-17T19:44:38.649875Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","commit-index":463}
	{"level":"info","ts":"2024-04-17T19:44:38.650058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-17T19:44:38.650133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became follower at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:38.650183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d4d2bfcafeea25a0 [peers: [], term: 2, commit: 463, applied: 0, lastindex: 463, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-17T19:44:38.659477Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-17T19:44:38.697755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":443}
	{"level":"info","ts":"2024-04-17T19:44:38.707673Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-17T19:44:38.717426Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d4d2bfcafeea25a0","timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:44:38.717814Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d4d2bfcafeea25a0"}
	{"level":"info","ts":"2024-04-17T19:44:38.717884Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"d4d2bfcafeea25a0","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-17T19:44:38.718131Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-17T19:44:38.724336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724414Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724427Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-17T19:44:38.724674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 switched to configuration voters=(15335530559731017120)"}
	{"level":"info","ts":"2024-04-17T19:44:38.724744Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","added-peer-id":"d4d2bfcafeea25a0","added-peer-peer-urls":["https://192.168.61.156:2380"]}
	{"level":"info","ts":"2024-04-17T19:44:38.72484Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d4c5be1f342e7586","local-member-id":"d4d2bfcafeea25a0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:44:38.724886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T19:44:38.751437Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-17T19:44:38.751639Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:38.751796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:38.758327Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4d2bfcafeea25a0","initial-advertise-peer-urls":["https://192.168.61.156:2380"],"listen-peer-urls":["https://192.168.61.156:2380"],"advertise-client-urls":["https://192.168.61.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-17T19:44:38.758391Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [ca53906416edd23556aa3667e3f101da8cf3977f07976d42cef97b22b46109ff] <==
	{"level":"info","ts":"2024-04-17T19:44:49.439637Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.156:2380"}
	{"level":"info","ts":"2024-04-17T19:44:51.022846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.022958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.023024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 received MsgPreVoteResp from d4d2bfcafeea25a0 at term 2"}
	{"level":"info","ts":"2024-04-17T19:44:51.023061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became candidate at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 received MsgVoteResp from d4d2bfcafeea25a0 at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4d2bfcafeea25a0 became leader at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.023147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4d2bfcafeea25a0 elected leader d4d2bfcafeea25a0 at term 3"}
	{"level":"info","ts":"2024-04-17T19:44:51.029412Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4d2bfcafeea25a0","local-member-attributes":"{Name:pause-646953 ClientURLs:[https://192.168.61.156:2379]}","request-path":"/0/members/d4d2bfcafeea25a0/attributes","cluster-id":"d4c5be1f342e7586","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T19:44:51.029428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:44:51.029594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T19:44:51.029633Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T19:44:51.029509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T19:44:51.031402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.156:2379"}
	{"level":"info","ts":"2024-04-17T19:44:51.031608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T19:45:13.398175Z","caller":"traceutil/trace.go:171","msg":"trace[1491267071] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"258.183909ms","start":"2024-04-17T19:45:13.13993Z","end":"2024-04-17T19:45:13.398114Z","steps":["trace[1491267071] 'process raft request'  (duration: 257.739749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:45:14.212553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.556313ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2711324126771220590 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" mod_revision:505 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" value_size:6315 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-17T19:45:14.212905Z","caller":"traceutil/trace.go:171","msg":"trace[1857233745] linearizableReadLoop","detail":"{readStateIndex:545; appliedIndex:544; }","duration":"412.31772ms","start":"2024-04-17T19:45:13.800571Z","end":"2024-04-17T19:45:14.212888Z","steps":["trace[1857233745] 'read index received'  (duration: 22.407338ms)","trace[1857233745] 'applied index is now lower than readState.Index'  (duration: 389.908685ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:45:14.213029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"412.451597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-646953\" ","response":"range_response_count:1 size:7010"}
	{"level":"info","ts":"2024-04-17T19:45:14.21308Z","caller":"traceutil/trace.go:171","msg":"trace[990864836] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-646953; range_end:; response_count:1; response_revision:506; }","duration":"412.536305ms","start":"2024-04-17T19:45:13.800535Z","end":"2024-04-17T19:45:14.213071Z","steps":["trace[990864836] 'agreement among raft nodes before linearized reading'  (duration: 412.443317ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T19:45:14.21315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:45:13.800518Z","time spent":"412.614429ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7034,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-646953\" "}
	{"level":"info","ts":"2024-04-17T19:45:14.213529Z","caller":"traceutil/trace.go:171","msg":"trace[911604733] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"791.828314ms","start":"2024-04-17T19:45:13.421686Z","end":"2024-04-17T19:45:14.213514Z","steps":["trace[911604733] 'process raft request'  (duration: 401.259604ms)","trace[911604733] 'compare'  (duration: 388.166269ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T19:45:14.213672Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T19:45:13.421668Z","time spent":"791.96155ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6386,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" mod_revision:505 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" value_size:6315 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-646953\" > >"}
	{"level":"info","ts":"2024-04-17T19:45:14.615722Z","caller":"traceutil/trace.go:171","msg":"trace[1794128587] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"159.693693ms","start":"2024-04-17T19:45:14.456009Z","end":"2024-04-17T19:45:14.615703Z","steps":["trace[1794128587] 'process raft request'  (duration: 157.736335ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T19:45:15.635178Z","caller":"traceutil/trace.go:171","msg":"trace[95993324] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"159.093782ms","start":"2024-04-17T19:45:15.476059Z","end":"2024-04-17T19:45:15.635152Z","steps":["trace[95993324] 'process raft request'  (duration: 65.096496ms)","trace[95993324] 'compare'  (duration: 93.750914ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:45:25 up 2 min,  0 users,  load average: 1.39, 0.50, 0.19
	Linux pause-646953 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b3e088056d8eec3b1f5ded416bd1bc5447674878143e96081f16bed4b0b6f5a] <==
	I0417 19:45:04.178932       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0417 19:45:04.179633       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0417 19:45:04.179698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0417 19:45:04.181289       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 19:45:04.181375       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0417 19:45:04.181423       1 aggregator.go:165] initial CRD sync complete...
	I0417 19:45:04.181450       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 19:45:04.181472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 19:45:04.181493       1 cache.go:39] Caches are synced for autoregister controller
	I0417 19:45:04.187339       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 19:45:04.189385       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0417 19:45:04.888633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0417 19:45:05.434185       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.156]
	I0417 19:45:05.441209       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 19:45:05.467156       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0417 19:45:05.611103       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 19:45:05.635062       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 19:45:05.700191       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 19:45:05.748770       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 19:45:05.757540       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 19:45:14.215542       1 trace.go:236] Trace[1507521914]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e39bf3d5-e1bd-482c-bf93-edf722f694d9,client:192.168.61.156,api-group:,api-version:v1,name:kube-controller-manager-pause-646953,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-646953/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/22140b6,verb:PATCH (17-Apr-2024 19:45:13.412) (total time: 803ms):
	Trace[1507521914]: ["GuaranteedUpdate etcd3" audit-id:e39bf3d5-e1bd-482c-bf93-edf722f694d9,key:/pods/kube-system/kube-controller-manager-pause-646953,type:*core.Pod,resource:pods 803ms (19:45:13.412)
	Trace[1507521914]:  ---"Txn call completed" 795ms (19:45:14.214)]
	Trace[1507521914]: ---"Object stored in database" 796ms (19:45:14.214)
	Trace[1507521914]: [803.255906ms] [803.255906ms] END
	
	
	==> kube-apiserver [79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9] <==
	I0417 19:44:56.294874       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0417 19:44:56.295446       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0417 19:44:56.295597       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0417 19:44:56.295608       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0417 19:44:56.295628       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0417 19:44:56.295694       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0417 19:44:56.295778       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:44:56.296944       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0417 19:44:56.297319       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:44:56.299886       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0417 19:44:56.299960       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0417 19:44:56.299996       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0417 19:44:56.303371       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0417 19:44:56.312893       1 controller.go:157] Shutting down quota evaluator
	I0417 19:44:56.312927       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313035       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313043       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313049       1 controller.go:176] quota evaluator worker shutdown
	I0417 19:44:56.313053       1 controller.go:176] quota evaluator worker shutdown
	E0417 19:44:57.021605       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:57.022829       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0417 19:44:58.021309       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:58.022656       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0417 19:44:59.021995       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0417 19:44:59.023479       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-controller-manager [618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4] <==
	
	
	==> kube-controller-manager [f19a30628afd0a436bcd6a0f05055226b5957f96ec7e9474723e55a46c0d57dd] <==
	I0417 19:45:17.185305       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0417 19:45:17.207602       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0417 19:45:17.209933       1 shared_informer.go:320] Caches are synced for HPA
	I0417 19:45:17.211118       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0417 19:45:17.212218       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0417 19:45:17.213589       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0417 19:45:17.213695       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0417 19:45:17.213697       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0417 19:45:17.213715       1 shared_informer.go:320] Caches are synced for daemon sets
	I0417 19:45:17.222121       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0417 19:45:17.298851       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0417 19:45:17.299055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="129.394µs"
	I0417 19:45:17.330377       1 shared_informer.go:320] Caches are synced for disruption
	I0417 19:45:17.342801       1 shared_informer.go:320] Caches are synced for deployment
	I0417 19:45:17.352763       1 shared_informer.go:320] Caches are synced for attach detach
	I0417 19:45:17.357919       1 shared_informer.go:320] Caches are synced for persistent volume
	I0417 19:45:17.370592       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:45:17.382087       1 shared_informer.go:320] Caches are synced for expand
	I0417 19:45:17.399892       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 19:45:17.410430       1 shared_informer.go:320] Caches are synced for ephemeral
	I0417 19:45:17.433048       1 shared_informer.go:320] Caches are synced for stateful set
	I0417 19:45:17.439356       1 shared_informer.go:320] Caches are synced for PVC protection
	I0417 19:45:17.830792       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:45:17.893144       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 19:45:17.893206       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74] <==
	I0417 19:44:39.160859       1 server_linux.go:69] "Using iptables proxy"
	E0417 19:44:49.177688       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-646953\": net/http: TLS handshake timeout"
	I0417 19:44:56.213345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.156"]
	I0417 19:44:56.271074       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:44:56.271184       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:44:56.271222       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:44:56.276816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:44:56.277107       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:44:56.277301       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:44:56.281590       1 config.go:192] "Starting service config controller"
	I0417 19:44:56.281730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:44:56.282213       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:44:56.282371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:44:56.287891       1 config.go:319] "Starting node config controller"
	I0417 19:44:56.287925       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:44:56.382735       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:44:56.382879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:44:56.388036       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ed7515695bd5717e845f2e2be64ebea0be5e77446b10e9277bb745d2ddee86e5] <==
	I0417 19:45:05.228576       1 server_linux.go:69] "Using iptables proxy"
	I0417 19:45:05.259971       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.156"]
	I0417 19:45:05.316945       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 19:45:05.317080       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 19:45:05.317124       1 server_linux.go:165] "Using iptables Proxier"
	I0417 19:45:05.323432       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 19:45:05.323918       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 19:45:05.324000       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 19:45:05.325165       1 config.go:192] "Starting service config controller"
	I0417 19:45:05.325211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 19:45:05.325369       1 config.go:101] "Starting endpoint slice config controller"
	I0417 19:45:05.325476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 19:45:05.325378       1 config.go:319] "Starting node config controller"
	I0417 19:45:05.326049       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 19:45:05.427818       1 shared_informer.go:320] Caches are synced for service config
	I0417 19:45:05.428066       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 19:45:05.428177       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0a1e6dd8fc614776dd1b5263f65136f509424d9a838dbc82aa2bec36d08fa1c] <==
	I0417 19:45:04.056095       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 19:45:04.058470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0417 19:45:04.088737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.088845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.088973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0417 19:45:04.089014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0417 19:45:04.089091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0417 19:45:04.089122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0417 19:45:04.089214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.089337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.089424       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0417 19:45:04.089456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0417 19:45:04.089562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0417 19:45:04.089618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0417 19:45:04.091405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0417 19:45:04.091484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0417 19:45:04.091713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0417 19:45:04.091769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0417 19:45:04.091872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0417 19:45:04.092500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0417 19:45:04.092046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0417 19:45:04.092606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0417 19:45:04.092101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0417 19:45:04.092661       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0417 19:45:04.159359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7] <==
	I0417 19:44:39.528782       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945107    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/148dc06994f8c7b5b9f6353a4eca512b-etcd-data\") pod \"etcd-pause-646953\" (UID: \"148dc06994f8c7b5b9f6353a4eca512b\") " pod="kube-system/etcd-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945122    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-flexvolume-dir\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945139    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-k8s-certs\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945156    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36f35028cccbcd2d15942038a695af3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-646953\" (UID: \"e36f35028cccbcd2d15942038a695af3\") " pod="kube-system/kube-controller-manager-pause-646953"
	Apr 17 19:45:00 pause-646953 kubelet[3430]: I0417 19:45:00.945171    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77fe5231f0f485a3b159d32d2c47ceda-kubeconfig\") pod \"kube-scheduler-pause-646953\" (UID: \"77fe5231f0f485a3b159d32d2c47ceda\") " pod="kube-system/kube-scheduler-pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.045649    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.046649    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.156:8443: connect: connection refused" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.220607    3430 scope.go:117] "RemoveContainer" containerID="e8370f2f797f7ad33e6db2af4e57829117e3bcf906009f2a42ced321a94c33d7"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.225791    3430 scope.go:117] "RemoveContainer" containerID="79a0007ae410b48b105514aac0ef0fbaa140064c186940afb7903861a4a530b9"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.227334    3430 scope.go:117] "RemoveContainer" containerID="618429c1fd68584d89611dce13625139ff75b5a0548eca4dac235b91f8b7d1e4"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.344446    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-646953?timeout=10s\": dial tcp 192.168.61.156:8443: connect: connection refused" interval="800ms"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: I0417 19:45:01.448764    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:01 pause-646953 kubelet[3430]: E0417 19:45:01.450440    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.156:8443: connect: connection refused" node="pause-646953"
	Apr 17 19:45:02 pause-646953 kubelet[3430]: I0417 19:45:02.252016    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.196472    3430 kubelet_node_status.go:112] "Node was previously registered" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.197008    3430 kubelet_node_status.go:76] "Successfully registered node" node="pause-646953"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.200618    3430 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.203705    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.713542    3430 apiserver.go:52] "Watching apiserver"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.716874    3430 topology_manager.go:215] "Topology Admit Handler" podUID="3eec3404-cdc2-4158-88b9-df4e4203f290" podNamespace="kube-system" podName="kube-proxy-w9mzs"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.717183    3430 topology_manager.go:215] "Topology Admit Handler" podUID="de016395-52a7-4569-945d-dc21c63529c1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qrd57"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.737998    3430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.759037    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eec3404-cdc2-4158-88b9-df4e4203f290-lib-modules\") pod \"kube-proxy-w9mzs\" (UID: \"3eec3404-cdc2-4158-88b9-df4e4203f290\") " pod="kube-system/kube-proxy-w9mzs"
	Apr 17 19:45:04 pause-646953 kubelet[3430]: I0417 19:45:04.759306    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eec3404-cdc2-4158-88b9-df4e4203f290-xtables-lock\") pod \"kube-proxy-w9mzs\" (UID: \"3eec3404-cdc2-4158-88b9-df4e4203f290\") " pod="kube-system/kube-proxy-w9mzs"
	Apr 17 19:45:05 pause-646953 kubelet[3430]: I0417 19:45:05.020069    3430 scope.go:117] "RemoveContainer" containerID="1a9049c5a40d747b4a077b65e123d02d1cfc145f2b6481fa833d2cdb4f496c74"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-646953 -n pause-646953
helpers_test.go:261: (dbg) Run:  kubectl --context pause-646953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.25s)
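(Editorial note, not part of the harness output.) The post-mortem check at helpers_test.go:261 above looks for any pod whose status.phase is not Running. For readers who want the same check programmatically rather than via kubectl, a minimal client-go sketch follows; it is an illustration only, not minikube's helper code, and it assumes the default kubeconfig at ~/.kube/config with the relevant context already selected.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig written by minikube is at the default path and
	// its current context points at the cluster under test (e.g. pause-646953).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same filter the test uses: every pod, in any namespace, not in phase Running.
	pods, err := cs.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

An empty result is the passing case the harness expects; any line printed here names a pod the test would flag.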

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (7200.076s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-233612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0417 19:54:10.227318   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/client.crt: no such file or directory
E0417 19:54:13.311262   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/auto-450558/client.crt: no such file or directory
E0417 19:54:32.609694   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/client.crt: no such file or directory
E0417 19:54:33.339741   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/flannel-450558/client.crt: no such file or directory
E0417 19:54:39.059998   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kindnet-450558/client.crt: no such file or directory
E0417 19:54:40.995844   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/auto-450558/client.crt: no such file or directory
E0417 19:54:41.347470   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/enable-default-cni-450558/client.crt: no such file or directory
E0417 19:54:51.187850   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/client.crt: no such file or directory
E0417 19:55:06.745235   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/kindnet-450558/client.crt: no such file or directory
E0417 19:55:55.261350   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/flannel-450558/client.crt: no such file or directory
E0417 19:56:10.490502   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt: no such file or directory
E0417 19:56:13.108685   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/client.crt: no such file or directory
E0417 19:56:38.174075   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/calico-450558/client.crt: no such file or directory
E0417 19:56:48.767552   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/client.crt: no such file or directory
E0417 19:56:57.502515   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/enable-default-cni-450558/client.crt: no such file or directory
E0417 19:57:16.450510   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/custom-flannel-450558/client.crt: no such file or directory
E0417 19:57:25.188973   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/enable-default-cni-450558/client.crt: no such file or directory
E0417 19:58:11.417742   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/flannel-450558/client.crt: no such file or directory
E0417 19:58:19.319347   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 19:58:29.264671   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/bridge-450558/client.crt: no such file or directory
E0417 19:58:39.101798   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/flannel-450558/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (21m44s)
	TestNetworkPlugins/group (9m58s)
	TestStartStop (16m25s)
	TestStartStop/group/default-k8s-diff-port (9m57s)
	TestStartStop/group/default-k8s-diff-port/serial (9m57s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5m19s)
	TestStartStop/group/embed-certs (10m7s)
	TestStartStop/group/embed-certs/serial (10m7s)
	TestStartStop/group/embed-certs/serial/SecondStart (6m18s)
	TestStartStop/group/no-preload (11m26s)
	TestStartStop/group/no-preload/serial (11m26s)
	TestStartStop/group/no-preload/serial/SecondStart (6m4s)
	TestStartStop/group/old-k8s-version (11m31s)
	TestStartStop/group/old-k8s-version/serial (11m31s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (4m46s)
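(Editorial note.) The panic above is Go's own per-binary test deadline: the testing package arms a timer when the binary starts and, once the configured -timeout elapses (here 2h0m0s), it prints the tests still running and panics, which is also why goroutine 3368 below sits in testing.(*M).startAlarm.func1. A minimal sketch of the two pieces involved, under the assumption of a generic invocation and a hypothetical runWithDeadline helper (not minikube's actual code):

// Invocation side: the 2h budget is set per test binary, e.g.
//
//   go test ./test/integration -run 'TestStartStop|TestNetworkPlugins' -timeout 2h
//
// Test side: a sub-step can carry its own, smaller deadline so one slow
// SecondStart cannot silently consume the whole 2h budget.
package integration

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// runWithDeadline runs a command and fails the test if it exceeds d,
// instead of letting the binary-wide timer fire first.
func runWithDeadline(t *testing.T, d time.Duration, name string, args ...string) {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	if err != nil {
		t.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}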

                                                
                                                
goroutine 3368 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008b51e0, 0xc0006c5bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000136a98, {0x47f7520, 0x2b, 0x2b}, {0x25e0c00?, 0xc0006d4480?, 0x48b2200?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00089cb40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00089cb40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 22 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000722d00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2744 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2743
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2925 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000952820, {0x2594bc0?, 0x60400000004?}, 0xc00042c100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000952820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000952820, 0xc00378e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2284
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2280 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002840ea0, 0x2fece78)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1726
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2575 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027ba740, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2573
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 40 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 39
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2316 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0026b2d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2263
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 389 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc000094750, 0xc0020a1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0xa0?, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5942a5?, 0xc000c72840?, 0xc00010faa0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 388 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001fd6e50, 0x21)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020c8a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001fd6e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00070ff50, {0x353c400, 0xc000bfeff0}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00070ff50, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3359 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022749a0, 0xc0027be660)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3356
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2284 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002841520, {0x2588ed5?, 0x0?}, 0xc00378e100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002841520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002841520, 0xc0027ba200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2280
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2966 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc003754350, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002438de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc003754380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00317e380, {0x353c400, 0xc0024e0240}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00317e380, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2957
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 390 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 389
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3351 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c1235e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0027af620?, 0xc0021b49fe?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0027af620, {0xc0021b49fe, 0x1602, 0x1602})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bfb5e8, {0xc0021b49fe?, 0xc0021c2d30?, 0xfe4e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008d0b10, {0x353aea0, 0xc0006741e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0008d0b10}, {0x353aea0, 0xc0006741e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bfb5e8?, {0x353afe0, 0xc0008d0b10})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bfb5e8, {0x353afe0, 0xc0008d0b10})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0008d0b10}, {0x353af00, 0xc000bfb5e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002192300?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3349
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2281 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002841040, {0x2588ed5?, 0x0?}, 0xc00378e080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002841040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002841040, 0xc0027ba140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2280
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2970 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027ba7c0, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2987
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1688 [chan receive, 23 minutes]:
testing.(*T).Run(0xc0020f04e0, {0x2587948?, 0x55273c?}, 0xc00270a360)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020f04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020f04e0, 0x2fecc58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3243 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00283ec90, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc003339e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00283ecc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000901110, {0x353c400, 0xc0008d17d0}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000901110, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3285
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3352 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022746e0, 0xc00010fe60)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3349
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 834 [select, 70 minutes]:
net/http.(*persistConn).writeLoop(0xc0025eb560)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 831
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 213 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f354c68dea0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00089b100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00089b100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000630760)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000630760)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007b60f0, {0x3552a10, 0xc000630760})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007b60f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0020f1520?, 0xc0020f1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 210
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 376 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020c8b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2742 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00283f610, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0026b3da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00283f640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00090db30, {0x353c400, 0xc002c8e300}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00090db30, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2728
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2422 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0024d7590, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002bc7680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0024d75c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00210f780, {0x353c400, 0xc00264a690}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00210f780, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2401
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2423 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc002172f50, 0xc0020bff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0xa0?, 0xc002172f50, 0xc002172f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0x6dc83a?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5942a5?, 0xc0020f31e0?, 0xc0006bd7a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2401
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3234 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc003754e80, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3229
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 377 [chan receive, 72 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001fd6e80, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3184 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc003754e50, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc003804ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc003754e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002358a0, {0x353c400, 0xc0027fa360}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002358a0, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3234
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3284 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0000c02a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3274
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1726 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0024c64e0, {0x2587948?, 0x5525f3?}, 0x2fece78)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0024c64e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0024c64e0, 0x2fecca0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3285 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00283ecc0, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3274
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3378 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c68d018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022c3800?, 0xc0023b25c8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022c3800, {0xc0023b25c8, 0x1ba38, 0x1ba38})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00213c6a8, {0xc0023b25c8?, 0x5398a0?, 0x1fe14?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0038beba0, {0x353aea0, 0xc000674370})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0038beba0}, {0x353aea0, 0xc000674370}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00213c6a8?, {0x353afe0, 0xc0038beba0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00213c6a8, {0x353afe0, 0xc0038beba0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0038beba0}, {0x353af00, 0xc00213c6a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001aa980?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3344
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3358 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c1238d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fd3500?, 0xc00237402b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fd3500, {0xc00237402b, 0x7fd5, 0x7fd5})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bfb818, {0xc00237402b?, 0xc002173530?, 0xfe53?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008d1440, {0x353aea0, 0xc000674410})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0008d1440}, {0x353aea0, 0xc000674410}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bfb818?, {0x353afe0, 0xc0008d1440})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bfb818, {0x353afe0, 0xc0008d1440})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0008d1440}, {0x353af00, 0xc000bfb818}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002192480?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3356
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 833 [select, 70 minutes]:
net/http.(*persistConn).readLoop(0xc0025eb560)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 831
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2301 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 661 [chan send, 70 minutes]:
os/exec.(*Cmd).watchCtx(0xc002930b00, 0xc002749d40)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 317
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3356 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x2222e, 0xc0020fdab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc00276eb70)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc00276eb70)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022749a0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0022749a0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0009536c0, 0xc0022749a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x355ef50, 0xc00066c0e0}, 0xc0009536c0, {0xc00005ff50, 0x16}, {0x0?, 0xc0032f3760?}, {0x5525f3?, 0x4a280f?}, {0xc002788000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0009536c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0009536c0, 0xc00042c700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2917
	/usr/local/go/src/testing/testing.go:1742 +0x390
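
Goroutines such as 3356 above (and 3334 and 3349 further down) are parked in os/exec.(*Cmd).Wait inside integration.Run while validateSecondStart waits for another minikube start invocation, which is why each */SecondStart entry in the running-tests list had several minutes of elapsed time when the global timeout fired. As a hedged illustration of bounding such a subprocess with its own deadline (a sketch only, not the helpers_test.go implementation; the arguments and the 13-minute budget are assumptions):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical per-command budget so a hung start fails its subtest well
	// before the binary-wide -timeout panics the entire run.
	ctx, cancel := context.WithTimeout(context.Background(), 13*time.Minute)
	defer cancel()

	// Arguments are illustrative; the real tests drive out/minikube-linux-amd64
	// with a per-profile set of flags.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start", "-p", "example-profile")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("second start failed: %v\n%s", err, out)
	}
}
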

                                                
                                                
goroutine 2969 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020f4fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2987
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3168 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0008b5040, {0x2594bc0?, 0x60400000004?}, 0xc0001aa980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0008b5040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0008b5040, 0xc00089ba80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2283
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2286 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002841860, {0x2588ed5?, 0x0?}, 0xc00378f100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002841860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002841860, 0xc0027ba2c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2280
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3334 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x21f80, 0xc0020aeab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0038ae3c0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0038ae3c0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00277e420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00277e420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024f01a0, 0xc00277e420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x355ef50, 0xc000484af0}, 0xc0024f01a0, {0xc0028a4d50, 0x12}, {0x0?, 0xc000505f60?}, {0x5525f3?, 0x4a280f?}, {0xc0008a6e00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0024f01a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0024f01a0, 0xc0001aa700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3116
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 396 [chan send, 72 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c73b80, 0xc00215cd20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 395
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3336 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c68d4f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022c2960?, 0xc002455757?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022c2960, {0xc002455757, 0x68a9, 0x68a9})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00213c468, {0xc002455757?, 0xc002934d30?, 0xfe35?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0038be510, {0x353aea0, 0xc000674108})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0038be510}, {0x353aea0, 0xc000674108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00213c468?, {0x353afe0, 0xc0038be510})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00213c468, {0x353afe0, 0xc0038be510})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0038be510}, {0x353af00, 0xc00213c468}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027982a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3334
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3185 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc002174750, 0xc002174798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x60?, 0xc002174750, 0xc002174798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0021747b0?, 0x99ba98?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5942a5?, 0xc0005289a0?, 0xc00010ec60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3234
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 535 [chan send, 72 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027b7b80, 0xc0027be780)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 534
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3344 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x2212c, 0xc0020acab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0038ae840)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0038ae840)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00277e840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00277e840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024f0680, 0xc00277e840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x355ef50, 0xc000480a80}, 0xc0024f0680, {0xc003760780, 0x1c}, {0x0?, 0xc0021c1f60?}, {0x5525f3?, 0x4a280f?}, {0xc0024d2d00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0024f0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0024f0680, 0xc0001aa980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3168
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2968 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2967
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2728 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00283f640, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2283 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002841380, {0x2588ed5?, 0x0?}, 0xc00089ba80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002841380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002841380, 0xc0027ba1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2280
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2726 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2725
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2956 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002438f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2955
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2594 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc002100750, 0xc002100798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x20?, 0xc002100750, 0xc002100798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0021007b0?, 0x99ba98?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021007d0?, 0x594304?, 0xc0021007a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2575
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3337 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00277e420, 0xc002798600)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3334
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1750 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc0009219f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0020f16c0, 0xc00270a360)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1688
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2595 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2594
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3250 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3185
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3244 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc00241f750, 0xc00241f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0xc0?, 0xc00241f750, 0xc00241f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0027b9720?, 0xc0027b9720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00210b3b0?, 0x594304?, 0xc0006e3d60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3285
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3026 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2993
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2299 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000afcc90, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0026b2c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000afccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027e8240, {0x353c400, 0xc0022682a0}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027e8240, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2317
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3116 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000953860, {0x2594bc0?, 0x60400000004?}, 0xc0001aa700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000953860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000953860, 0xc00378f100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2286
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2718 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000adfce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2317 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000afccc0, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2263
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2993 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc002935f50, 0xc0020b0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x80?, 0xc002935f50, 0xc002935f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc002935fb0?, 0x99ba98?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99ba45?, 0xc002789800?, 0xc002193680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2529 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0027ba710, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002499200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027ba740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027061f0, {0x353c400, 0xc0028e01e0}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027061f0, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2575
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2574 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002499320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2573
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2424 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2423
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2719 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b60c00, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2401 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0024d75c0, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2399
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3379 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00277e840, 0xc002799320)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3344
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3345 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f354c68dcb0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022c3740?, 0xc0024dfbbc?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022c3740, {0xc0024dfbbc, 0x444, 0x444})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00213c690, {0xc0024dfbbc?, 0xc002177d30?, 0x21a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0038beb40, {0x353aea0, 0xc0029a2140})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0038beb40}, {0x353aea0, 0xc0029a2140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00213c690?, {0x353afe0, 0xc0038beb40})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00213c690, {0x353afe0, 0xc0038beb40})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0038beb40}, {0x353af00, 0xc00213c690}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00215c900?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3344
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2743 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc000502f50, 0xc0020aff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x7?, 0xc000502f50, 0xc000502f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0024c76c0?, 0x552f20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000502fd0?, 0x594304?, 0xc002c8e180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2728
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2282 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0009219f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0028411e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0028411e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0028411e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0028411e0, 0xc0027ba180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2280
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3350 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f354c68d7d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0027af560?, 0xc002a402a9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0027af560, {0xc002a402a9, 0x557, 0x557})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bfb5b0, {0xc002a402a9?, 0x20f1d00?, 0x233?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008d0ae0, {0x353aea0, 0xc00213c5e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0008d0ae0}, {0x353aea0, 0xc00213c5e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bfb5b0?, {0x353afe0, 0xc0008d0ae0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bfb5b0, {0x353afe0, 0xc0008d0ae0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0008d0ae0}, {0x353af00, 0xc000bfb5b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00042c100?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3349
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3245 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3349 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x22013, 0xc0026adab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc00276e900)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc00276e900)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022746e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0022746e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000953380, 0xc0022746e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x355ef50, 0xc00066c000}, 0xc000953380, {0xc0028a4030, 0x11}, {0x0?, 0xc002176760?}, {0x5525f3?, 0x4a280f?}, {0xc0008a6600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000953380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000953380, 0xc00042c100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2925
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2724 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002b60bd0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000adfbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b60c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027e9970, {0x353c400, 0xc0008d05d0}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027e9970, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2719
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3233 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc003804cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3229
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2725 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc00241a750, 0xc00241a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x7?, 0xc00241a750, 0xc00241a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0020f1d40?, 0x552f20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00241a7d0?, 0x594304?, 0xc000af9410?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2719
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2967 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc002105750, 0xc002105798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0xb3?, 0xc002105750, 0xc002105798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc00361e5f0?, 0xc00361e5f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00280c600?, 0x0?, 0xc003783770?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2957
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2917 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0009521a0, {0x2594bc0?, 0x60400000004?}, 0xc00042c700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0009521a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0009521a0, 0xc00378e080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2281
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2992 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0027ba790, 0x11)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x209a3a0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020f4ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027ba7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000aff390, {0x353c400, 0xc00264a720}, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000aff390, 0x3b9aca00, 0x0, 0x1, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3335 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c1233f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0022c28a0?, 0xc0024632a7?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0022c28a0, {0xc0024632a7, 0x559, 0x559})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00213c3b0, {0xc0024632a7?, 0x5398a0?, 0x234?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0038be4e0, {0x353aea0, 0xc000bfb4e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0038be4e0}, {0x353aea0, 0xc000bfb4e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00213c3b0?, {0x353afe0, 0xc0038be4e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00213c3b0, {0x353afe0, 0xc0038be4e0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0038be4e0}, {0x353af00, 0xc00213c3b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001aa700?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3334
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2957 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc003754380, 0xc00010eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2955
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2300 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x355f110, 0xc00010eea0}, 0xc0021c5750, 0xc0021c5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x355f110, 0xc00010eea0}, 0x0?, 0xc0021c5750, 0xc0021c5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x355f110?, 0xc00010eea0?}, 0xc0024c7040?, 0x552f20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5942a5?, 0xc0024ec000?, 0xc0006bc600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2317
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2400 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002bc77a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2399
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2727 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0026b3ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3357 [IO wait]:
internal/poll.runtime_pollWait(0x7f354c68d6e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fd3440?, 0xc002a41aee?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fd3440, {0xc002a41aee, 0x512, 0x512})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bfb7f0, {0xc002a41aee?, 0x20f1d00?, 0x219?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008d1410, {0x353aea0, 0xc00213c7d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x353afe0, 0xc0008d1410}, {0x353aea0, 0xc00213c7d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bfb7f0?, {0x353afe0, 0xc0008d1410})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bfb7f0, {0x353afe0, 0xc0008d1410})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x353afe0, 0xc0008d1410}, {0x353af00, 0xc000bfb7f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00042c700?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3356
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
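
The goroutine dump above follows the standard os/exec pattern: each test in the harness launches a minikube command as a subprocess, pipe-reader goroutines (the ones parked in "IO wait") copy the child's stdout/stderr into buffers, a watchCtx goroutine stands by to kill the child if the surrounding context expires, and the test goroutine itself blocks in (*Cmd).Wait until the child exits. A minimal sketch of that pattern; the command, profile name, and timeout are illustrative only, not the harness's real values:

	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Illustrative 40-minute deadline; the real limits are set by the test harness.
		ctx, cancel := context.WithTimeout(context.Background(), 40*time.Minute)
		defer cancel()

		var stdout, stderr bytes.Buffer
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start", "-p", "example-profile")
		cmd.Stdout = &stdout // copied by the exec writerDescriptor goroutines seen in the dump
		cmd.Stderr = &stderr

		// Run blocks in (*Cmd).Wait -> os.(*Process).blockUntilWaitable (cf. goroutine 3349);
		// watchCtx (cf. goroutine 3379) kills the child when ctx expires, surfacing as "signal: killed".
		if err := cmd.Run(); err != nil {
			fmt.Printf("command failed: %v\nstderr:\n%s\n", err, stderr.String())
		}
	}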

                                                
                                    

Test pass (164/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.47
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0-rc.2/json-events 5.62
13 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
17 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.15
19 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 120.78
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
28 TestCertOptions 57.41
29 TestCertExpiration 305.99
31 TestForceSystemdFlag 105.37
32 TestForceSystemdEnv 43.24
34 TestKVMDriverInstallOrUpdate 3.59
38 TestErrorSpam/setup 45.82
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.77
41 TestErrorSpam/pause 1.62
42 TestErrorSpam/unpause 1.76
43 TestErrorSpam/stop 5.72
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 57.94
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 374.31
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
55 TestFunctional/serial/CacheCmd/cache/add_local 2.02
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
60 TestFunctional/serial/CacheCmd/cache/delete 0.12
61 TestFunctional/serial/MinikubeKubectlCmd 0.12
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
63 TestFunctional/serial/ExtraConfig 39.42
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.7
66 TestFunctional/serial/LogsFileCmd 1.68
67 TestFunctional/serial/InvalidService 4.4
69 TestFunctional/parallel/ConfigCmd 0.44
70 TestFunctional/parallel/DashboardCmd 30.24
71 TestFunctional/parallel/DryRun 0.33
72 TestFunctional/parallel/InternationalLanguage 0.19
73 TestFunctional/parallel/StatusCmd 1.07
77 TestFunctional/parallel/ServiceCmdConnect 12.59
78 TestFunctional/parallel/AddonsCmd 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 47.01
81 TestFunctional/parallel/SSHCmd 0.47
82 TestFunctional/parallel/CpCmd 1.46
83 TestFunctional/parallel/MySQL 31.93
84 TestFunctional/parallel/FileSync 0.24
85 TestFunctional/parallel/CertSync 1.64
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
93 TestFunctional/parallel/License 0.19
94 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
95 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
96 TestFunctional/parallel/ProfileCmd/profile_list 0.37
97 TestFunctional/parallel/MountCmd/any-port 7.96
98 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
103 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
104 TestFunctional/parallel/ImageCommands/Setup 1.35
105 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.47
106 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.8
107 TestFunctional/parallel/MountCmd/specific-port 1.97
108 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.07
109 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
110 TestFunctional/parallel/ServiceCmd/List 0.34
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
113 TestFunctional/parallel/ServiceCmd/Format 0.42
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.7
116 TestFunctional/parallel/ServiceCmd/URL 0.39
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.38
118 TestFunctional/parallel/ImageCommands/ImageRemove 1.4
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.52
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.2
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 207.14
140 TestMultiControlPlane/serial/DeployApp 5.68
141 TestMultiControlPlane/serial/PingHostFromPods 1.41
142 TestMultiControlPlane/serial/AddWorkerNode 45.04
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
145 TestMultiControlPlane/serial/CopyFile 13.74
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.5
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
154 TestMultiControlPlane/serial/RestartCluster 362.46
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
156 TestMultiControlPlane/serial/AddSecondaryNode 75.98
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
161 TestJSONOutput/start/Command 99.06
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.73
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.67
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.35
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.21
189 TestMainNoArgs 0.06
190 TestMinikubeProfile 91.33
193 TestMountStart/serial/StartWithMountFirst 28.15
194 TestMountStart/serial/VerifyMountFirst 0.4
195 TestMountStart/serial/StartWithMountSecond 25.18
196 TestMountStart/serial/VerifyMountSecond 0.4
197 TestMountStart/serial/DeleteFirst 0.68
198 TestMountStart/serial/VerifyMountPostDelete 0.42
199 TestMountStart/serial/Stop 1.41
200 TestMountStart/serial/RestartStopped 23.4
201 TestMountStart/serial/VerifyMountPostStop 0.39
204 TestMultiNode/serial/FreshStart2Nodes 100.34
205 TestMultiNode/serial/DeployApp2Nodes 4.53
206 TestMultiNode/serial/PingHostFrom2Pods 0.87
207 TestMultiNode/serial/AddNode 40.01
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.23
210 TestMultiNode/serial/CopyFile 7.51
211 TestMultiNode/serial/StopNode 2.4
212 TestMultiNode/serial/StartAfterStop 29.78
214 TestMultiNode/serial/DeleteNode 2.29
216 TestMultiNode/serial/RestartMultiNode 171.39
217 TestMultiNode/serial/ValidateNameConflict 47.32
224 TestScheduledStopUnix 116.04
228 TestRunningBinaryUpgrade 195.22
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
237 TestNoKubernetes/serial/StartWithK8s 89.83
246 TestNoKubernetes/serial/StartWithStopK8s 41.09
247 TestNoKubernetes/serial/Start 49.9
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
249 TestNoKubernetes/serial/ProfileList 0.83
250 TestNoKubernetes/serial/Stop 1.34
251 TestNoKubernetes/serial/StartNoArgs 67.16
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
253 TestStoppedBinaryUpgrade/Setup 0.83
254 TestStoppedBinaryUpgrade/Upgrade 96.71
263 TestPause/serial/Start 111.67
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
TestDownloadOnly/v1.20.0/json-events (11.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-354848 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-354848 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.465356256s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.47s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-354848
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-354848: exit status 85 (77.683262ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-354848 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:58 UTC |          |
	|         | -p download-only-354848        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 17:58:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 17:58:55.382460   83219 out.go:291] Setting OutFile to fd 1 ...
	I0417 17:58:55.382579   83219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:58:55.382588   83219 out.go:304] Setting ErrFile to fd 2...
	I0417 17:58:55.382592   83219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:58:55.382834   83219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	W0417 17:58:55.382968   83219 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18665-75973/.minikube/config/config.json: open /home/jenkins/minikube-integration/18665-75973/.minikube/config/config.json: no such file or directory
	I0417 17:58:55.383561   83219 out.go:298] Setting JSON to true
	I0417 17:58:55.384505   83219 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6083,"bootTime":1713370652,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 17:58:55.384574   83219 start.go:139] virtualization: kvm guest
	I0417 17:58:55.387126   83219 out.go:97] [download-only-354848] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 17:58:55.388806   83219 out.go:169] MINIKUBE_LOCATION=18665
	W0417 17:58:55.387281   83219 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball: no such file or directory
	I0417 17:58:55.387331   83219 notify.go:220] Checking for updates...
	I0417 17:58:55.391738   83219 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 17:58:55.393214   83219 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 17:58:55.394498   83219 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 17:58:55.395907   83219 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0417 17:58:55.398228   83219 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 17:58:55.398606   83219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 17:58:55.433883   83219 out.go:97] Using the kvm2 driver based on user configuration
	I0417 17:58:55.433911   83219 start.go:297] selected driver: kvm2
	I0417 17:58:55.433919   83219 start.go:901] validating driver "kvm2" against <nil>
	I0417 17:58:55.434297   83219 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:58:55.434400   83219 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 17:58:55.450374   83219 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 17:58:55.450448   83219 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 17:58:55.451129   83219 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0417 17:58:55.451326   83219 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 17:58:55.451437   83219 cni.go:84] Creating CNI manager for ""
	I0417 17:58:55.451457   83219 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 17:58:55.451469   83219 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 17:58:55.451535   83219 start.go:340] cluster config:
	{Name:download-only-354848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-354848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:58:55.451766   83219 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:58:55.453818   83219 out.go:97] Downloading VM boot image ...
	I0417 17:58:55.453867   83219 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 17:58:58.066981   83219 out.go:97] Starting "download-only-354848" primary control-plane node in "download-only-354848" cluster
	I0417 17:58:58.067006   83219 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 17:58:58.090044   83219 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0417 17:58:58.090089   83219 cache.go:56] Caching tarball of preloaded images
	I0417 17:58:58.090276   83219 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0417 17:58:58.092081   83219 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0417 17:58:58.092114   83219 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0417 17:58:58.122579   83219 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0417 17:59:05.292283   83219 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0417 17:59:05.292384   83219 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-354848 host does not exist
	  To start a cluster, run: "minikube start -p download-only-354848"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
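
The preload log above downloads the tarball with an md5 checksum parameter in the URL and then verifies the file on disk (preload.go:248/255). A rough sketch of that verification step using only the standard library; the path and digest below are placeholders, not values from this run:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it with the expected hex digest.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Placeholder path and digest; the real values appear in the download URL logged above.
		if err := verifyMD5("preloaded-images-k8s.tar.lz4", "00000000000000000000000000000000"); err != nil {
			fmt.Println(err)
		}
	}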

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-354848
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/json-events (5.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-772573 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-772573 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.615821488s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (5.62s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-772573
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-772573: exit status 85 (72.471016ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-354848 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:58 UTC |                     |
	|         | -p download-only-354848           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:59 UTC | 17 Apr 24 17:59 UTC |
	| delete  | -p download-only-354848           | download-only-354848 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:59 UTC | 17 Apr 24 17:59 UTC |
	| start   | -o=json --download-only           | download-only-772573 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:59 UTC |                     |
	|         | -p download-only-772573           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 17:59:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 17:59:07.201549   83395 out.go:291] Setting OutFile to fd 1 ...
	I0417 17:59:07.201659   83395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:59:07.201670   83395 out.go:304] Setting ErrFile to fd 2...
	I0417 17:59:07.201675   83395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:59:07.201886   83395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 17:59:07.202457   83395 out.go:298] Setting JSON to true
	I0417 17:59:07.203303   83395 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6095,"bootTime":1713370652,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 17:59:07.203371   83395 start.go:139] virtualization: kvm guest
	I0417 17:59:07.205773   83395 out.go:97] [download-only-772573] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 17:59:07.205918   83395 notify.go:220] Checking for updates...
	I0417 17:59:07.207603   83395 out.go:169] MINIKUBE_LOCATION=18665
	I0417 17:59:07.209241   83395 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 17:59:07.210556   83395 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 17:59:07.211917   83395 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 17:59:07.213386   83395 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0417 17:59:07.216259   83395 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 17:59:07.216517   83395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 17:59:07.248503   83395 out.go:97] Using the kvm2 driver based on user configuration
	I0417 17:59:07.248527   83395 start.go:297] selected driver: kvm2
	I0417 17:59:07.248533   83395 start.go:901] validating driver "kvm2" against <nil>
	I0417 17:59:07.248917   83395 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:59:07.249002   83395 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 17:59:07.264893   83395 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 17:59:07.264948   83395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 17:59:07.265446   83395 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0417 17:59:07.265603   83395 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 17:59:07.265683   83395 cni.go:84] Creating CNI manager for ""
	I0417 17:59:07.265696   83395 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0417 17:59:07.265704   83395 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 17:59:07.265770   83395 start.go:340] cluster config:
	{Name:download-only-772573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-772573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:59:07.265865   83395 iso.go:125] acquiring lock: {Name:mkb4c46d9d11025607234f0b50f29e48600415ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:59:07.267687   83395 out.go:97] Starting "download-only-772573" primary control-plane node in "download-only-772573" cluster
	I0417 17:59:07.267708   83395 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 17:59:07.294168   83395 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 17:59:07.294233   83395 cache.go:56] Caching tarball of preloaded images
	I0417 17:59:07.294424   83395 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 17:59:07.296434   83395 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0417 17:59:07.296461   83395 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0417 17:59:07.320038   83395 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:3f21ab668c1533072cd1f73a92db63f3 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0417 17:59:09.953265   83395 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0417 17:59:09.953369   83395 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18665-75973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0417 17:59:10.793099   83395 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0417 17:59:10.793431   83395 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/download-only-772573/config.json ...
	I0417 17:59:10.793459   83395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/download-only-772573/config.json: {Name:mk186a45e23171c0eb6756b3848fbe3e0ad17f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:59:10.793615   83395 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0417 17:59:10.793741   83395 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18665-75973/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-772573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-772573"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-772573
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-489656 --alsologtostderr --binary-mirror http://127.0.0.1:40735 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-489656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-489656
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (120.78s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-680425 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-680425 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m59.668654002s)
helpers_test.go:175: Cleaning up "offline-crio-680425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-680425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-680425: (1.115755771s)
--- PASS: TestOffline (120.78s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-221213
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-221213: exit status 85 (73.5879ms)

                                                
                                                
-- stdout --
	* Profile "addons-221213" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-221213"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-221213
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-221213: exit status 85 (72.980201ms)

                                                
                                                
-- stdout --
	* Profile "addons-221213" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-221213"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestCertOptions (57.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-120912 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-120912 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (56.185110788s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-120912 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-120912 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-120912 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-120912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-120912
--- PASS: TestCertOptions (57.41s)

TestCertExpiration (305.99s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362714 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0417 19:38:19.319255   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362714 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m35.300248466s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362714 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362714 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.639030756s)
helpers_test.go:175: Cleaning up "cert-expiration-362714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-362714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-362714: (1.04615426s)
--- PASS: TestCertExpiration (305.99s)

TestForceSystemdFlag (105.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-239125 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-239125 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m44.325538812s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-239125 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-239125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-239125
--- PASS: TestForceSystemdFlag (105.37s)

TestForceSystemdEnv (43.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-747557 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-747557 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.471253947s)
helpers_test.go:175: Cleaning up "force-systemd-env-747557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-747557
--- PASS: TestForceSystemdEnv (43.24s)

TestKVMDriverInstallOrUpdate (3.59s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.59s)

TestErrorSpam/setup (45.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-999226 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-999226 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-999226 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-999226 --driver=kvm2  --container-runtime=crio: (45.815166292s)
--- PASS: TestErrorSpam/setup (45.82s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 pause
--- PASS: TestErrorSpam/pause (1.62s)

TestErrorSpam/unpause (1.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (5.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop: (2.311230631s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop: (1.371715286s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-999226 --log_dir /tmp/nospam-999226 stop: (2.033488906s)
--- PASS: TestErrorSpam/stop (5.72s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18665-75973/.minikube/files/etc/test/nested/copy/83207/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-169848 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.936549638s)
--- PASS: TestFunctional/serial/StartWithProxy (57.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (374.31s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-169848 --alsologtostderr -v=8: (6m14.307964581s)
functional_test.go:659: soft start took 6m14.309111497s for "functional-169848" cluster.
--- PASS: TestFunctional/serial/SoftStart (374.31s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-169848 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:3.1: (1.039220852s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:3.3: (1.05198861s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 cache add registry.k8s.io/pause:latest: (1.054760526s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-169848 /tmp/TestFunctionalserialCacheCmdcacheadd_local2875223568/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache add minikube-local-cache-test:functional-169848
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 cache add minikube-local-cache-test:functional-169848: (1.62139733s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache delete minikube-local-cache-test:functional-169848
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-169848
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.976019ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 kubectl -- --context functional-169848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-169848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-169848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.422470015s)
functional_test.go:757: restart took 39.422593313s for "functional-169848" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.42s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-169848 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 logs: (1.704179937s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

TestFunctional/serial/LogsFileCmd (1.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 logs --file /tmp/TestFunctionalserialLogsFileCmd988472080/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 logs --file /tmp/TestFunctionalserialLogsFileCmd988472080/001/logs.txt: (1.676660078s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.68s)

TestFunctional/serial/InvalidService (4.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-169848 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-169848
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-169848: exit status 115 (302.190554ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.38:31390 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-169848 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 config get cpus: exit status 14 (79.457228ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 config get cpus: exit status 14 (72.192714ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (30.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169848 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169848 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 95147: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.24s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (167.627253ms)

-- stdout --
	* [functional-169848] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0417 18:48:21.349718   93689 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:48:21.350138   93689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:48:21.350155   93689 out.go:304] Setting ErrFile to fd 2...
	I0417 18:48:21.350162   93689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:48:21.350635   93689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:48:21.351489   93689 out.go:298] Setting JSON to false
	I0417 18:48:21.352796   93689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9049,"bootTime":1713370652,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:48:21.352893   93689 start.go:139] virtualization: kvm guest
	I0417 18:48:21.355261   93689 out.go:177] * [functional-169848] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:48:21.356923   93689 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:48:21.356969   93689 notify.go:220] Checking for updates...
	I0417 18:48:21.359917   93689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:48:21.361370   93689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:48:21.362882   93689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:48:21.364296   93689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:48:21.365532   93689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:48:21.367407   93689 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:48:21.368021   93689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:48:21.368102   93689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:48:21.384157   93689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0417 18:48:21.384621   93689 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:48:21.385321   93689 main.go:141] libmachine: Using API Version  1
	I0417 18:48:21.385361   93689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:48:21.385826   93689 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:48:21.386053   93689 main.go:141] libmachine: (functional-169848) Calling .DriverName
	I0417 18:48:21.386358   93689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:48:21.386657   93689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:48:21.386708   93689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:48:21.403013   93689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0417 18:48:21.403475   93689 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:48:21.404045   93689 main.go:141] libmachine: Using API Version  1
	I0417 18:48:21.404073   93689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:48:21.404474   93689 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:48:21.404694   93689 main.go:141] libmachine: (functional-169848) Calling .DriverName
	I0417 18:48:21.439898   93689 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 18:48:21.441302   93689 start.go:297] selected driver: kvm2
	I0417 18:48:21.441328   93689 start.go:901] validating driver "kvm2" against &{Name:functional-169848 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.2 ClusterName:functional-169848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:48:21.441445   93689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:48:21.443528   93689 out.go:177] 
	W0417 18:48:21.444984   93689 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0417 18:48:21.446257   93689 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169848 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (188.524452ms)

-- stdout --
	* [functional-169848] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0417 18:48:21.686084   93747 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:48:21.686263   93747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:48:21.686278   93747 out.go:304] Setting ErrFile to fd 2...
	I0417 18:48:21.686286   93747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:48:21.686776   93747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 18:48:21.687632   93747 out.go:298] Setting JSON to false
	I0417 18:48:21.689247   93747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9050,"bootTime":1713370652,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:48:21.689355   93747 start.go:139] virtualization: kvm guest
	I0417 18:48:21.692000   93747 out.go:177] * [functional-169848] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0417 18:48:21.693543   93747 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:48:21.695130   93747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:48:21.693634   93747 notify.go:220] Checking for updates...
	I0417 18:48:21.697992   93747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	I0417 18:48:21.699539   93747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	I0417 18:48:21.700948   93747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:48:21.702429   93747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:48:21.704581   93747 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 18:48:21.705287   93747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:48:21.705352   93747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:48:21.720432   93747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0417 18:48:21.721014   93747 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:48:21.721745   93747 main.go:141] libmachine: Using API Version  1
	I0417 18:48:21.721776   93747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:48:21.722185   93747 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:48:21.722396   93747 main.go:141] libmachine: (functional-169848) Calling .DriverName
	I0417 18:48:21.722666   93747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:48:21.723095   93747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 18:48:21.723140   93747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:48:21.744591   93747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0417 18:48:21.745519   93747 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:48:21.746132   93747 main.go:141] libmachine: Using API Version  1
	I0417 18:48:21.746160   93747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:48:21.746581   93747 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:48:21.746894   93747 main.go:141] libmachine: (functional-169848) Calling .DriverName
	I0417 18:48:21.784193   93747 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0417 18:48:21.785704   93747 start.go:297] selected driver: kvm2
	I0417 18:48:21.785725   93747 start.go:901] validating driver "kvm2" against &{Name:functional-169848 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.2 ClusterName:functional-169848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:48:21.785892   93747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:48:21.788493   93747 out.go:177] 
	W0417 18:48:21.790053   93747 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0417 18:48:21.791572   93747 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (12.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-169848 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-169848 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2m2fj" [c16b6567-ab88-45c7-96ea-7f991d718662] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2m2fj" [c16b6567-ab88-45c7-96ea-7f991d718662] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005282741s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.38:32015
functional_test.go:1671: http://192.168.39.38:32015: success! body:

Hostname: hello-node-connect-57b4589c47-2m2fj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.38:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.38:32015
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.59s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (47.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e4805ecb-2d80-482f-9487-290bac5c6cd1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004382849s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-169848 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-169848 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-169848 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-169848 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-169848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a8ba3834-4337-407a-a36c-41af9309a715] Pending
helpers_test.go:344: "sp-pod" [a8ba3834-4337-407a-a36c-41af9309a715] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a8ba3834-4337-407a-a36c-41af9309a715] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.006306565s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-169848 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-169848 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-169848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5779d0c8-08c0-4288-aedc-954e1eb10669] Pending
helpers_test.go:344: "sp-pod" [5779d0c8-08c0-4288-aedc-954e1eb10669] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5779d0c8-08c0-4288-aedc-954e1eb10669] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005052541s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-169848 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.01s)

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh -n functional-169848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cp functional-169848:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3944425607/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh -n functional-169848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh -n functional-169848 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

TestFunctional/parallel/MySQL (31.93s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-169848 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-plhkc" [1f06a916-6165-4c2d-85b7-3f41ecb42b58] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-plhkc" [1f06a916-6165-4c2d-85b7-3f41ecb42b58] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.005443494s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;": exit status 1 (155.243614ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;": exit status 1 (655.003897ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
2024/04/17 18:49:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.93s)
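
The two ERROR 2002 attempts above are expected noise: the pod reports Running before mysqld inside it starts accepting socket connections, so the test simply retries the client until it succeeds. A by-hand sketch of that retry loop (the pod name is the generated one from this run; in practice it would be looked up with kubectl get pods -l app=mysql):

    for i in $(seq 1 10); do
      kubectl --context functional-169848 exec mysql-64454c8b5c-plhkc -- mysql -ppassword -e "show databases;" && break
      sleep 3   # mysqld not ready yet; try again
    done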

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/83207/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/test/nested/copy/83207/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
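
This exercises minikube's file sync behavior: files placed under $MINIKUBE_HOME/files/ on the host are copied into the guest at the same relative path when the cluster starts (the 83207 path component here is just the per-run location the test suite chose). The check itself is a plain cat over SSH:

    out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/test/nested/copy/83207/hosts"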

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/83207.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/83207.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/83207.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /usr/share/ca-certificates/83207.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/832072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/832072.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/832072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /usr/share/ca-certificates/832072.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)
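
Cert sync works the same way for certificates dropped under $MINIKUBE_HOME/certs/ on the host: each one appears in the guest both under its own name and under an OpenSSL subject-hash style alias (the .0 files above), in /etc/ssl/certs and /usr/share/ca-certificates. Spot-checking one pair by hand:

    out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/83207.pem"
    out/minikube-linux-amd64 -p functional-169848 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named alias of the same certificate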

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-169848 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
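
The go-template above just prints the label keys of the first node; the same information is available with plainer kubectl flags:

    kubectl --context functional-169848 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    kubectl --context functional-169848 get nodes --show-labels   # simpler: prints key=value pairs per node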

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active docker": exit status 1 (266.284348ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active containerd": exit status 1 (254.178888ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
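
The non-zero exits above are the expected outcome: with --container-runtime=crio, the docker and containerd units should be inactive, and systemctl is-active exits with status 3 for an inactive unit (surfacing as exit status 1 through the ssh wrapper). The active runtime can be probed the same way:

    out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active docker"   # prints "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-169848 ssh "sudo systemctl is-active crio"     # should print "active"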

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-169848 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-169848 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-dqfwb" [8d2c9a7b-a3b8-4d4e-bce4-c4b61afd2fa0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-dqfwb" [8d2c9a7b-a3b8-4d4e-bce4-c4b61afd2fa0] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00643184s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
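
The deployment and NodePort service used by the ServiceCmd subtests are created with stock kubectl; readiness is just a matter of waiting for the app=hello-node pod:

    kubectl --context functional-169848 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-169848 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-169848 get pods -l app=hello-node   # repeat until the pod reports Running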

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "289.384028ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "79.049905ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
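
Both timings come from the same command family; the -l / --light variant skips the per-profile status check, which would explain the roughly 80ms versus 290ms difference logged above:

    out/minikube-linux-amd64 profile list      # full listing, includes cluster status
    out/minikube-linux-amd64 profile list -l   # light mode: list profiles without probing their status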

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdany-port4262304626/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713379700743630945" to /tmp/TestFunctionalparallelMountCmdany-port4262304626/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713379700743630945" to /tmp/TestFunctionalparallelMountCmdany-port4262304626/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713379700743630945" to /tmp/TestFunctionalparallelMountCmdany-port4262304626/001/test-1713379700743630945
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.755823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 17 18:48 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 17 18:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 17 18:48 test-1713379700743630945
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh cat /mount-9p/test-1713379700743630945
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-169848 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [df1a53d8-8ce0-4270-843a-91a003fe1881] Pending
helpers_test.go:344: "busybox-mount" [df1a53d8-8ce0-4270-843a-91a003fe1881] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [df1a53d8-8ce0-4270-843a-91a003fe1881] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [df1a53d8-8ce0-4270-843a-91a003fe1881] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00384993s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-169848 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdany-port4262304626/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.96s)
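
The sequence above is the usual 9p mount round trip: the mount command stays in the foreground for as long as the mount is live (the test runs it as a background daemon), the first findmnt probe can fail while the mount is still coming up, and cleanup is an explicit umount plus stopping the mount process. A by-hand sketch, with /tmp/host-dir standing in for an arbitrary host directory:

    out/minikube-linux-amd64 mount -p functional-169848 /tmp/host-dir:/mount-9p & MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p"   # retry if the mount is not up yet
    out/minikube-linux-amd64 -p functional-169848 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID   # stop the backgrounded mount process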

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "296.394183ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "68.355317ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
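
-o json and --light compose, so a fast machine-readable listing is available as well; jq is used here only to pretty-print, since the exact JSON shape is not reproduced in this log:

    out/minikube-linux-amd64 profile list -o json --light | jq .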

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169848 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0-rc.2
registry.k8s.io/kube-proxy:v1.30.0-rc.2
registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
registry.k8s.io/kube-apiserver:v1.30.0-rc.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-169848
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-169848
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169848 image ls --format short --alsologtostderr:
I0417 18:48:51.040403   95503 out.go:291] Setting OutFile to fd 1 ...
I0417 18:48:51.040527   95503 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.040537   95503 out.go:304] Setting ErrFile to fd 2...
I0417 18:48:51.040541   95503 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.040797   95503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
I0417 18:48:51.041442   95503 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.041550   95503 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.041960   95503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.042021   95503 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.057999   95503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
I0417 18:48:51.058575   95503 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.059235   95503 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.059271   95503 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.059734   95503 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.059989   95503 main.go:141] libmachine: (functional-169848) Calling .GetState
I0417 18:48:51.062153   95503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.062208   95503 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.077944   95503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
I0417 18:48:51.078481   95503 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.079012   95503 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.079039   95503 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.079351   95503 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.079530   95503 main.go:141] libmachine: (functional-169848) Calling .DriverName
I0417 18:48:51.079743   95503 ssh_runner.go:195] Run: systemctl --version
I0417 18:48:51.079771   95503 main.go:141] libmachine: (functional-169848) Calling .GetSSHHostname
I0417 18:48:51.082450   95503 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.082836   95503 main.go:141] libmachine: (functional-169848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:a7", ip: ""} in network mk-functional-169848: {Iface:virbr1 ExpiryTime:2024-04-17 19:40:26 +0000 UTC Type:0 Mac:52:54:00:e4:89:a7 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-169848 Clientid:01:52:54:00:e4:89:a7}
I0417 18:48:51.082860   95503 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.083113   95503 main.go:141] libmachine: (functional-169848) Calling .GetSSHPort
I0417 18:48:51.083283   95503 main.go:141] libmachine: (functional-169848) Calling .GetSSHKeyPath
I0417 18:48:51.083435   95503 main.go:141] libmachine: (functional-169848) Calling .GetSSHUsername
I0417 18:48:51.083575   95503 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/functional-169848/id_rsa Username:docker}
I0417 18:48:51.207716   95503 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:48:51.283057   95503 main.go:141] libmachine: Making call to close driver server
I0417 18:48:51.283078   95503 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:51.283430   95503 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:51.283463   95503 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:51.283474   95503 main.go:141] libmachine: Making call to close driver server
I0417 18:48:51.283482   95503 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:51.283435   95503 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
I0417 18:48:51.283737   95503 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:51.283757   95503 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:51.283784   95503 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
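
image ls supports several output formats, exercised by the next few subtests; all of them read the same CRI image store on the node (sudo crictl images under the hood, as the debug log above shows):

    out/minikube-linux-amd64 -p functional-169848 image ls --format short   # one reference per line (above)
    out/minikube-linux-amd64 -p functional-169848 image ls --format table
    out/minikube-linux-amd64 -p functional-169848 image ls --format json
    out/minikube-linux-amd64 -p functional-169848 image ls --format yaml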

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169848 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0-rc.2       | 461015b94df4b | 63MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-169848  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.0-rc.2       | 35c7fe5cdbee5 | 85.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-169848  | 7fa6d8fcaa792 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.0-rc.2       | ae2ef7918948c | 112MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-169848  | 0988e7b1d358b | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.30.0-rc.2       | 65a750108e0b6 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169848 image ls --format table --alsologtostderr:
I0417 18:48:55.317076   95663 out.go:291] Setting OutFile to fd 1 ...
I0417 18:48:55.317239   95663 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:55.317257   95663 out.go:304] Setting ErrFile to fd 2...
I0417 18:48:55.317263   95663 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:55.317557   95663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
I0417 18:48:55.318395   95663 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:55.318552   95663 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:55.319150   95663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:55.319231   95663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:55.334317   95663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
I0417 18:48:55.334903   95663 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:55.335616   95663 main.go:141] libmachine: Using API Version  1
I0417 18:48:55.335654   95663 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:55.336001   95663 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:55.336274   95663 main.go:141] libmachine: (functional-169848) Calling .GetState
I0417 18:48:55.338426   95663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:55.338479   95663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:55.353972   95663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
I0417 18:48:55.354475   95663 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:55.355049   95663 main.go:141] libmachine: Using API Version  1
I0417 18:48:55.355076   95663 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:55.355440   95663 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:55.355628   95663 main.go:141] libmachine: (functional-169848) Calling .DriverName
I0417 18:48:55.355848   95663 ssh_runner.go:195] Run: systemctl --version
I0417 18:48:55.355880   95663 main.go:141] libmachine: (functional-169848) Calling .GetSSHHostname
I0417 18:48:55.358834   95663 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:55.359309   95663 main.go:141] libmachine: (functional-169848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:a7", ip: ""} in network mk-functional-169848: {Iface:virbr1 ExpiryTime:2024-04-17 19:40:26 +0000 UTC Type:0 Mac:52:54:00:e4:89:a7 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-169848 Clientid:01:52:54:00:e4:89:a7}
I0417 18:48:55.359350   95663 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:55.359464   95663 main.go:141] libmachine: (functional-169848) Calling .GetSSHPort
I0417 18:48:55.359664   95663 main.go:141] libmachine: (functional-169848) Calling .GetSSHKeyPath
I0417 18:48:55.359843   95663 main.go:141] libmachine: (functional-169848) Calling .GetSSHUsername
I0417 18:48:55.360014   95663 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/functional-169848/id_rsa Username:docker}
I0417 18:48:55.482787   95663 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:48:55.553646   95663 main.go:141] libmachine: Making call to close driver server
I0417 18:48:55.553667   95663 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:55.554005   95663 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
I0417 18:48:55.554027   95663 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:55.554041   95663 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:55.554050   95663 main.go:141] libmachine: Making call to close driver server
I0417 18:48:55.554059   95663 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:55.554314   95663 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:55.554339   95663 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169848 image ls --format json --alsologtostderr:
[{"id":"461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543","registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0-rc.2"],"size":"63026500"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8","registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2"],"repoTags":["reg
istry.k8s.io/kube-controller-manager:v1.30.0-rc.2"],"size":"112170310"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-169848"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7fa6d8fcaa792328e0242a10b5db56cb659eaf6fa8781783f96da1375f55ae7f","repoDigests":["localhost/minikube-local-cache-test@sha256:e2a9eb73d9caf56d6b505522ff7108a50b5a66b6b1245e310f2a48eb8b80edfe"],"repoTags":["localhost/minikube-local-cache-test:functional-169848"],"size":"3330"},{"id":"da
86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053","registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0-rc.2"],"size":"117609952"},{"id":"e6f1816883972d4be47bd48879a089
19b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1ffaa7bf9329df08d4f8f2431e7aa77ec9a4381aba6350234742d290561a976a","repoDigests":["docker.io/library/5f15f711ee9b9cbdcafbd
551993435c0d6edda18613183b6817edc23fa6545ea-tmp@sha256:8e2b568cc4882935b3efc41cf8fbcb1b1c11f31336889cb7808e28ae3741707f"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0988e7b1d358b94289fa10c44270adc0d1385edd49db8d12ea6817ed4661ca21","repoDigests":["localhost/my-image@sha256:3a2a0b2d4647b34be9597cead374405d3fb6ec3bbccd82b20a08edb7ab79441a"],"repoTags":["localhost/my-image:functional-169848"],"size":"1468600"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b9
3c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e","repoDigests":["registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5","registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0-rc.2"],"size":"85932953"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169848 image ls --format json --alsologtostderr:
I0417 18:48:55.042126   95639 out.go:291] Setting OutFile to fd 1 ...
I0417 18:48:55.042382   95639 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:55.042392   95639 out.go:304] Setting ErrFile to fd 2...
I0417 18:48:55.042397   95639 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:55.042646   95639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
I0417 18:48:55.043254   95639 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:55.043369   95639 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:55.043802   95639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:55.043870   95639 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:55.058910   95639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
I0417 18:48:55.059364   95639 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:55.060008   95639 main.go:141] libmachine: Using API Version  1
I0417 18:48:55.060041   95639 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:55.060385   95639 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:55.060586   95639 main.go:141] libmachine: (functional-169848) Calling .GetState
I0417 18:48:55.062407   95639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:55.062450   95639 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:55.076882   95639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
I0417 18:48:55.077407   95639 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:55.077942   95639 main.go:141] libmachine: Using API Version  1
I0417 18:48:55.077971   95639 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:55.078298   95639 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:55.078455   95639 main.go:141] libmachine: (functional-169848) Calling .DriverName
I0417 18:48:55.078682   95639 ssh_runner.go:195] Run: systemctl --version
I0417 18:48:55.078709   95639 main.go:141] libmachine: (functional-169848) Calling .GetSSHHostname
I0417 18:48:55.081400   95639 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:55.081897   95639 main.go:141] libmachine: (functional-169848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:a7", ip: ""} in network mk-functional-169848: {Iface:virbr1 ExpiryTime:2024-04-17 19:40:26 +0000 UTC Type:0 Mac:52:54:00:e4:89:a7 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-169848 Clientid:01:52:54:00:e4:89:a7}
I0417 18:48:55.081931   95639 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:55.082113   95639 main.go:141] libmachine: (functional-169848) Calling .GetSSHPort
I0417 18:48:55.082287   95639 main.go:141] libmachine: (functional-169848) Calling .GetSSHKeyPath
I0417 18:48:55.082445   95639 main.go:141] libmachine: (functional-169848) Calling .GetSSHUsername
I0417 18:48:55.082586   95639 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/functional-169848/id_rsa Username:docker}
I0417 18:48:55.181202   95639 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:48:55.238764   95639 main.go:141] libmachine: Making call to close driver server
I0417 18:48:55.238783   95639 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:55.239082   95639 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:55.239150   95639 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:55.239165   95639 main.go:141] libmachine: Making call to close driver server
I0417 18:48:55.239173   95639 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:55.239458   95639 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
I0417 18:48:55.239482   95639 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:55.239508   95639 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
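
The JSON format is the easiest one to post-process: given the array-of-objects shape shown above (id, repoDigests, repoTags, size), a filter like this lists just the tagged references, silently skipping entries whose repoTags list is empty:

    out/minikube-linux-amd64 -p functional-169848 image ls --format json | jq -r '.[].repoTags[]'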

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169848 image ls --format yaml --alsologtostderr:
- id: 7fa6d8fcaa792328e0242a10b5db56cb659eaf6fa8781783f96da1375f55ae7f
repoDigests:
- localhost/minikube-local-cache-test@sha256:e2a9eb73d9caf56d6b505522ff7108a50b5a66b6b1245e310f2a48eb8b80edfe
repoTags:
- localhost/minikube-local-cache-test:functional-169848
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5
- registry.k8s.io/kube-proxy@sha256:1b9a4721b83e88882bc722d76a501c4c8d6d2c3b9a1bec7573e5d521d538f86d
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0-rc.2
size: "85932953"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-169848
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8
- registry.k8s.io/kube-controller-manager@sha256:d9fcf6b51a3159ddf5312598031d7e546aac64e6c45af1664362cb6556c8a6a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
size: "112170310"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543
- registry.k8s.io/kube-scheduler@sha256:415a6892729a92b8ea4a48f957269e92f200515dfac069853d781ea010b87216
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0-rc.2
size: "63026500"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053
- registry.k8s.io/kube-apiserver@sha256:e0629e36bd9583e862c127b5fe37eb7353dda7af7d0b6281b19fe3c3c3c23e9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0-rc.2
size: "117609952"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169848 image ls --format yaml --alsologtostderr:
I0417 18:48:51.351872   95527 out.go:291] Setting OutFile to fd 1 ...
I0417 18:48:51.352002   95527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.352011   95527 out.go:304] Setting ErrFile to fd 2...
I0417 18:48:51.352015   95527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.352217   95527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
I0417 18:48:51.352800   95527 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.352891   95527 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.353294   95527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.353337   95527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.368863   95527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
I0417 18:48:51.369405   95527 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.370096   95527 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.370124   95527 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.370544   95527 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.370745   95527 main.go:141] libmachine: (functional-169848) Calling .GetState
I0417 18:48:51.372574   95527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.372613   95527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.387800   95527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
I0417 18:48:51.388208   95527 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.388728   95527 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.388754   95527 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.389080   95527 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.389325   95527 main.go:141] libmachine: (functional-169848) Calling .DriverName
I0417 18:48:51.389561   95527 ssh_runner.go:195] Run: systemctl --version
I0417 18:48:51.389595   95527 main.go:141] libmachine: (functional-169848) Calling .GetSSHHostname
I0417 18:48:51.392627   95527 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.393122   95527 main.go:141] libmachine: (functional-169848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:a7", ip: ""} in network mk-functional-169848: {Iface:virbr1 ExpiryTime:2024-04-17 19:40:26 +0000 UTC Type:0 Mac:52:54:00:e4:89:a7 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-169848 Clientid:01:52:54:00:e4:89:a7}
I0417 18:48:51.393177   95527 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.393326   95527 main.go:141] libmachine: (functional-169848) Calling .GetSSHPort
I0417 18:48:51.393543   95527 main.go:141] libmachine: (functional-169848) Calling .GetSSHKeyPath
I0417 18:48:51.393691   95527 main.go:141] libmachine: (functional-169848) Calling .GetSSHUsername
I0417 18:48:51.393805   95527 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/functional-169848/id_rsa Username:docker}
I0417 18:48:51.475791   95527 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:48:51.517348   95527 main.go:141] libmachine: Making call to close driver server
I0417 18:48:51.517373   95527 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:51.517693   95527 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:51.517728   95527 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:51.517740   95527 main.go:141] libmachine: Making call to close driver server
I0417 18:48:51.517749   95527 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:51.518112   95527 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
I0417 18:48:51.518166   95527 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:51.518180   95527 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh pgrep buildkitd: exit status 1 (213.747343ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image build -t localhost/my-image:functional-169848 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image build -t localhost/my-image:functional-169848 testdata/build --alsologtostderr: (2.724646782s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169848 image build -t localhost/my-image:functional-169848 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1ffaa7bf932
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-169848
--> 0988e7b1d35
Successfully tagged localhost/my-image:functional-169848
0988e7b1d358b94289fa10c44270adc0d1385edd49db8d12ea6817ed4661ca21
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169848 image build -t localhost/my-image:functional-169848 testdata/build --alsologtostderr:
I0417 18:48:51.873208   95581 out.go:291] Setting OutFile to fd 1 ...
I0417 18:48:51.873353   95581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.873362   95581 out.go:304] Setting ErrFile to fd 2...
I0417 18:48:51.873367   95581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:48:51.873555   95581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
I0417 18:48:51.874110   95581 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.874876   95581 config.go:182] Loaded profile config "functional-169848": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
I0417 18:48:51.875505   95581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.875576   95581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.891593   95581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
I0417 18:48:51.892079   95581 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.892862   95581 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.892892   95581 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.893297   95581 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.893508   95581 main.go:141] libmachine: (functional-169848) Calling .GetState
I0417 18:48:51.895839   95581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0417 18:48:51.895887   95581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:48:51.912186   95581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
I0417 18:48:51.912682   95581 main.go:141] libmachine: () Calling .GetVersion
I0417 18:48:51.913305   95581 main.go:141] libmachine: Using API Version  1
I0417 18:48:51.913342   95581 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:48:51.913766   95581 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:48:51.913980   95581 main.go:141] libmachine: (functional-169848) Calling .DriverName
I0417 18:48:51.914216   95581 ssh_runner.go:195] Run: systemctl --version
I0417 18:48:51.914245   95581 main.go:141] libmachine: (functional-169848) Calling .GetSSHHostname
I0417 18:48:51.917219   95581 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.917667   95581 main.go:141] libmachine: (functional-169848) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:a7", ip: ""} in network mk-functional-169848: {Iface:virbr1 ExpiryTime:2024-04-17 19:40:26 +0000 UTC Type:0 Mac:52:54:00:e4:89:a7 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-169848 Clientid:01:52:54:00:e4:89:a7}
I0417 18:48:51.917709   95581 main.go:141] libmachine: (functional-169848) DBG | domain functional-169848 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:89:a7 in network mk-functional-169848
I0417 18:48:51.917877   95581 main.go:141] libmachine: (functional-169848) Calling .GetSSHPort
I0417 18:48:51.918068   95581 main.go:141] libmachine: (functional-169848) Calling .GetSSHKeyPath
I0417 18:48:51.918275   95581 main.go:141] libmachine: (functional-169848) Calling .GetSSHUsername
I0417 18:48:51.918437   95581 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/functional-169848/id_rsa Username:docker}
I0417 18:48:52.004592   95581 build_images.go:161] Building image from path: /tmp/build.3991758209.tar
I0417 18:48:52.004671   95581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0417 18:48:52.018198   95581 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3991758209.tar
I0417 18:48:52.023444   95581 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3991758209.tar: stat -c "%s %y" /var/lib/minikube/build/build.3991758209.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3991758209.tar': No such file or directory
I0417 18:48:52.023492   95581 ssh_runner.go:362] scp /tmp/build.3991758209.tar --> /var/lib/minikube/build/build.3991758209.tar (3072 bytes)
I0417 18:48:52.064152   95581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3991758209
I0417 18:48:52.076041   95581 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3991758209 -xf /var/lib/minikube/build/build.3991758209.tar
I0417 18:48:52.088858   95581 crio.go:315] Building image: /var/lib/minikube/build/build.3991758209
I0417 18:48:52.088934   95581 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-169848 /var/lib/minikube/build/build.3991758209 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0417 18:48:54.465733   95581 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-169848 /var/lib/minikube/build/build.3991758209 --cgroup-manager=cgroupfs: (2.376772435s)
I0417 18:48:54.465812   95581 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3991758209
I0417 18:48:54.496303   95581 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3991758209.tar
I0417 18:48:54.529318   95581 build_images.go:217] Built localhost/my-image:functional-169848 from /tmp/build.3991758209.tar
I0417 18:48:54.529370   95581 build_images.go:133] succeeded building to: functional-169848
I0417 18:48:54.529377   95581 build_images.go:134] failed building to: 
I0417 18:48:54.529418   95581 main.go:141] libmachine: Making call to close driver server
I0417 18:48:54.529453   95581 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:54.529788   95581 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:54.529827   95581 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:48:54.529837   95581 main.go:141] libmachine: (functional-169848) DBG | Closing plugin on server side
I0417 18:48:54.529843   95581 main.go:141] libmachine: Making call to close driver server
I0417 18:48:54.529878   95581 main.go:141] libmachine: (functional-169848) Calling .Close
I0417 18:48:54.530121   95581 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:48:54.530143   95581 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
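
Editor's note: the STEP lines above imply that the build context in testdata/build is roughly a Dockerfile of FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /, plus a content.txt file. The Go sketch below is a minimal, non-authoritative reproduction of the flow the harness follows: check for buildkitd over SSH (expected to fail on a CRI-O node, as it does above), build with minikube image build (which tars the context, copies it to /var/lib/minikube/build and runs podman there, per the log), then list images. The profile name is taken from this run; everything else is an assumption.

// imagebuild_sketch.go — rough reproduction of the ImageBuild flow shown above.
// Assumes minikube is on PATH and a profile named "functional-169848" is running.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-169848" // profile name taken from the log above

	// 1. The harness first checks whether buildkitd is running in the guest.
	//    On a CRI-O node this is expected to fail, as it does in the log.
	if _, err := run("minikube", "-p", profile, "ssh", "pgrep buildkitd"); err != nil {
		fmt.Println("buildkitd not present; minikube falls back to podman build")
	}

	// 2. image build packages the context dir into a tar, copies it into the
	//    guest under /var/lib/minikube/build, and runs podman build there.
	out, err := run("minikube", "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr")
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// 3. Confirm the image is visible to the container runtime, as the test does.
	out, _ = run("minikube", "-p", profile, "image", "ls")
	fmt.Println(out)
}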

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.326168237s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-169848
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr: (4.223416342s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr: (2.551911724s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdspecific-port3515935833/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.928948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdspecific-port3515935833/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "sudo umount -f /mount-9p": exit status 1 (222.083888ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-169848 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdspecific-port3515935833/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
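
Editor's note: this test starts a 9p mount daemon on a fixed port, polls findmnt in the guest until the mount appears (the first findmnt failure above is the expected race, not an error), lists the directory, then tears the mount down. A minimal Go sketch of the same check, assuming the functional-169848 profile is running and using a placeholder host path:

// mount_specificport_sketch.go — rough equivalent of the specific-port mount check above.
// Assumes minikube is on PATH and the "functional-169848" profile is running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-169848"

	// Start the 9p mount daemon on a fixed port, as the test does with --port 46464.
	mount := exec.Command("minikube", "mount", "-p", profile,
		"/tmp/mount-src:/mount-9p", "--port", "46464") // /tmp/mount-src is a placeholder path
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // the harness instead stops the daemon and re-checks umount

	// Poll findmnt inside the guest; the first attempt in the log fails because the
	// daemon has not finished mounting yet, so retry briefly.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared")
}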

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.152144107s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-169848
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image load --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr: (5.152726015s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T" /mount1: exit status 1 (266.64322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-169848 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3526115051/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)
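
Editor's note: VerifyCleanup launches three concurrent mounts, confirms each with findmnt, then tears them all down at once with mount --kill=true; the "unable to find parent, assuming dead" messages show the per-mount stop finds the daemons already gone. A tiny sketch of that one-shot cleanup, with the profile name as an assumption:

// mount_cleanup_sketch.go — kills any mount daemons for a profile in one shot,
// mirroring the --kill=true step in the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "mount", "-p", "functional-169848",
		"--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill failed: %v\n%s", err, out)
	}
	log.Printf("all mount processes for the profile terminated:\n%s", out)
}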

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service list -o json
functional_test.go:1490: Took "294.831449ms" to run "out/minikube-linux-amd64 -p functional-169848 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.38:30755
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.38:30755
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
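
Editor's note: the ServiceCmd block resolves the NodePort endpoint for hello-node in several forms (list, JSON, https, an {{.IP}} format template, and the plain URL found above). A minimal Go sketch that resolves the URL the same way and probes it; it assumes the hello-node service from this run still exists:

// service_url_sketch.go — resolves a NodePort URL as the ServiceCmd tests do, then probes it.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-169848",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.38:30755 in this run

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("service answered with", resp.Status)
}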

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image save gcr.io/google-containers/addon-resizer:functional-169848 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image save gcr.io/google-containers/addon-resizer:functional-169848 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.381839727s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image rm gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image rm gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr: (1.161379159s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.299920893s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-169848
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 image save --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-169848 image save --daemon gcr.io/google-containers/addon-resizer:functional-169848 --alsologtostderr: (1.163903238s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-169848
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.20s)
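
Editor's note: the four image tests above form a roundtrip: save the tagged image from the runtime to a tar, remove it from CRI-O, load it back from the tar, then push it into the host docker daemon with save --daemon (the earlier ImageLoadDaemon tests cover the reverse direction with load --daemon). A compact Go sketch of the same cycle; the tar path is a placeholder, the subcommands are the ones shown in the log:

// image_roundtrip_sketch.go — the save/remove/load/save-daemon cycle exercised above.
package main

import (
	"log"
	"os/exec"
)

func must(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "functional-169848"
	img := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar" // the CI job uses a workspace path instead

	must("minikube", "-p", profile, "image", "save", img, tar)        // runtime -> tar
	must("minikube", "-p", profile, "image", "rm", img)               // drop it from CRI-O
	must("minikube", "-p", profile, "image", "load", tar)             // tar -> runtime
	must("minikube", "-p", profile, "image", "save", "--daemon", img) // runtime -> host docker
	must("minikube", "-p", profile, "image", "ls")                    // confirm, as the tests do
}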

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-169848
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-169848
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-169848
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-467706 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-467706 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.429949951s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.14s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-467706 -- rollout status deployment/busybox: (3.267853416s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-gzsn2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-r65s7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-xg855 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-gzsn2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-r65s7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-xg855 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-gzsn2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-r65s7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-xg855 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.68s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-gzsn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-gzsn2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-r65s7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-r65s7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-xg855 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-467706 -- exec busybox-fc5497c4f-xg855 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)
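
Editor's note: PingHostFromPods resolves host.minikube.internal from inside each busybox pod and pings the resolved host IP (192.168.39.1 in this run). A Go sketch performing the same check for one pod via kubectl exec, assuming the busybox deployment from DeployApp is still running; the context name is taken from this run:

// pinghost_sketch.go — checks host reachability from inside a pod, as the
// PingHostFromPods step does above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "ha-467706"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Pick the first busybox pod, mirroring the jsonpath query in the log.
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	if len(pods) == 0 {
		log.Fatal("no pods found")
	}
	pod := pods[0]

	// Resolve host.minikube.internal inside the pod, then ping the host IP once.
	resolve := kubectl("exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	hostIP := strings.TrimSpace(resolve) // 192.168.39.1 in the run above
	fmt.Println("host IP from inside the pod:", hostIP)

	kubectl("exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	fmt.Println("host reachable from", pod)
}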

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-467706 -v=7 --alsologtostderr
E0417 18:53:19.319030   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.324974   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.335319   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.355660   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.396102   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.477051   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.637441   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:19.958511   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:20.598944   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:21.880061   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:24.440996   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:29.561547   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 18:53:39.801758   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-467706 -v=7 --alsologtostderr: (44.149001978s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.04s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-467706 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp testdata/cp-test.txt ha-467706:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706:/home/docker/cp-test.txt ha-467706-m02:/home/docker/cp-test_ha-467706_ha-467706-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test_ha-467706_ha-467706-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706:/home/docker/cp-test.txt ha-467706-m03:/home/docker/cp-test_ha-467706_ha-467706-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test_ha-467706_ha-467706-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706:/home/docker/cp-test.txt ha-467706-m04:/home/docker/cp-test_ha-467706_ha-467706-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test_ha-467706_ha-467706-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp testdata/cp-test.txt ha-467706-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m02:/home/docker/cp-test.txt ha-467706:/home/docker/cp-test_ha-467706-m02_ha-467706.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test_ha-467706-m02_ha-467706.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m02:/home/docker/cp-test.txt ha-467706-m03:/home/docker/cp-test_ha-467706-m02_ha-467706-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test_ha-467706-m02_ha-467706-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m02:/home/docker/cp-test.txt ha-467706-m04:/home/docker/cp-test_ha-467706-m02_ha-467706-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test_ha-467706-m02_ha-467706-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp testdata/cp-test.txt ha-467706-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt ha-467706:/home/docker/cp-test_ha-467706-m03_ha-467706.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test_ha-467706-m03_ha-467706.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt ha-467706-m02:/home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test_ha-467706-m03_ha-467706-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m03:/home/docker/cp-test.txt ha-467706-m04:/home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test_ha-467706-m03_ha-467706-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp testdata/cp-test.txt ha-467706-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2860960912/001/cp-test_ha-467706-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt ha-467706:/home/docker/cp-test_ha-467706-m04_ha-467706.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706 "sudo cat /home/docker/cp-test_ha-467706-m04_ha-467706.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt ha-467706-m02:/home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m02 "sudo cat /home/docker/cp-test_ha-467706-m04_ha-467706-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 cp ha-467706-m04:/home/docker/cp-test.txt ha-467706-m03:/home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 ssh -n ha-467706-m03 "sudo cat /home/docker/cp-test_ha-467706-m04_ha-467706-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.74s)
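
Editor's note: CopyFile seeds each node with testdata/cp-test.txt, copies it between every pair of nodes, and verifies each copy with ssh -n <node> sudo cat. The log above spells out all sixteen combinations; a compact Go sketch of the same pairwise loop, with the node names taken from this run:

// copyfile_sketch.go — pairwise cp/verify loop like the CopyFile step above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) string {
	out, err := exec.Command("minikube", append([]string{"-p", "ha-467706"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	nodes := []string{"ha-467706", "ha-467706-m02", "ha-467706-m03", "ha-467706-m04"}

	for _, src := range nodes {
		// Seed the source node, as the test does with testdata/cp-test.txt.
		mk("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		mk("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Copy node-to-node and read it back on the destination.
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			mk("cp", src+":/home/docker/cp-test.txt", target)
			mk("ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}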

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.491546347s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-467706 node delete m03 -v=7 --alsologtostderr: (16.718525772s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (362.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-467706 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0417 19:08:19.319318   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
E0417 19:09:42.365351   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-467706 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m1.662675019s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (362.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-467706 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-467706 --control-plane -v=7 --alsologtostderr: (1m15.099301265s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-467706 status -v=7 --alsologtostderr
E0417 19:13:19.319081   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.98s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (99.06s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-558518 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-558518 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.062818056s)
--- PASS: TestJSONOutput/start/Command (99.06s)
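
Editor's note: with --output=json, minikube prints one CloudEvents-style JSON object per line; the exact field names (specversion, id, source, type, data with currentstep, totalsteps, name, message) are visible verbatim in the TestErrorJSONOutput stdout near the end of this section. A minimal decoder sketch that assumes only those fields; it can be fed directly from a piped start command:

// jsonevents_sketch.go — decodes the line-delimited JSON events emitted by
// minikube start --output=json, using only fields visible in this report.
// Example: minikube start -p demo --output=json | go run jsonevents_sketch.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the objects in the log: io.k8s.sigs.minikube.step and .info.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Name        string `json:"name"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("[%s/%s] %s: %s\n", e.Data.CurrentStep, e.Data.TotalSteps, e.Data.Name, e.Data.Message)
		} else {
			fmt.Println(e.Data.Message)
		}
	}
}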

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-558518 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-558518 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-558518 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-558518 --output=json --user=testUser: (7.351163705s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-073058 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-073058 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.576026ms)
-- stdout --
	{"specversion":"1.0","id":"b6326f7e-6024-4762-883d-c8c8f8a2c07d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-073058] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f33e0618-c05b-4c80-9f45-e4499c0e029e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18665"}}
	{"specversion":"1.0","id":"0506682e-9b8a-4c33-b4c7-a5d4213960a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c7df921a-6e2b-42ba-9677-984ca836fbe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig"}}
	{"specversion":"1.0","id":"752e8388-17dc-4543-bbe1-7da39218c35e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube"}}
	{"specversion":"1.0","id":"443ba154-eb29-4a5f-9a40-7b2edf063589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bf96b486-e6c4-48a2-ad4c-5aa2e89b8d4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a497d3ed-0240-4172-98a5-583fa216d240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-073058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-073058
--- PASS: TestErrorJSONOutput (0.21s)
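The JSON lines in the stdout block above are minikube's CloudEvents-style events from --output=json. Below is a minimal sketch, in Go, of decoding one such line; it assumes only the keys visible in this log, and the minikubeEvent type and field names are illustrative, not minikube's own types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeEvent mirrors only the keys visible in the log output above (illustrative assumption).
	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Trimmed copy of the io.k8s.sigs.minikube.error event shown in the stdout block above.
		line := `{"specversion":"1.0","id":"a497d3ed-0240-4172-98a5-583fa216d240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// Prints the event type, message, and exit code, e.g.
		// io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64 (exit 56)
		fmt.Printf("%s: %s (exit %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
	}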

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (91.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-258987 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-258987 --driver=kvm2  --container-runtime=crio: (44.09209526s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-261780 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-261780 --driver=kvm2  --container-runtime=crio: (44.332193848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-258987
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-261780
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-261780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-261780
helpers_test.go:175: Cleaning up "first-258987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-258987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-258987: (1.006390685s)
--- PASS: TestMinikubeProfile (91.33s)

TestMountStart/serial/StartWithMountFirst (28.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-000637 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-000637 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.154307674s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.15s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-000637 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-000637 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (25.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-018014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-018014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.183646113s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.18s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-000637 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.41s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-018014
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-018014: (1.412110507s)
--- PASS: TestMountStart/serial/Stop (1.41s)

TestMountStart/serial/RestartStopped (23.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-018014
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-018014: (22.397844635s)
--- PASS: TestMountStart/serial/RestartStopped (23.40s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-018014 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (100.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0417 19:18:19.318998   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.912432108s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.34s)

TestMultiNode/serial/DeployApp2Nodes (4.53s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-990943 -- rollout status deployment/busybox: (2.878785815s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-dzp25 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-th5ps -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-dzp25 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-th5ps -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-dzp25 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-th5ps -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.53s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-dzp25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-dzp25 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-th5ps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990943 -- exec busybox-fc5497c4f-th5ps -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (40.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-990943 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-990943 -v 3 --alsologtostderr: (39.42325133s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-990943 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.51s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp testdata/cp-test.txt multinode-990943:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943:/home/docker/cp-test.txt multinode-990943-m02:/home/docker/cp-test_multinode-990943_multinode-990943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test_multinode-990943_multinode-990943-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943:/home/docker/cp-test.txt multinode-990943-m03:/home/docker/cp-test_multinode-990943_multinode-990943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test_multinode-990943_multinode-990943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp testdata/cp-test.txt multinode-990943-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt multinode-990943:/home/docker/cp-test_multinode-990943-m02_multinode-990943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test_multinode-990943-m02_multinode-990943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m02:/home/docker/cp-test.txt multinode-990943-m03:/home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test_multinode-990943-m02_multinode-990943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp testdata/cp-test.txt multinode-990943-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1613524278/001/cp-test_multinode-990943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt multinode-990943:/home/docker/cp-test_multinode-990943-m03_multinode-990943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943 "sudo cat /home/docker/cp-test_multinode-990943-m03_multinode-990943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 cp multinode-990943-m03:/home/docker/cp-test.txt multinode-990943-m02:/home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 ssh -n multinode-990943-m02 "sudo cat /home/docker/cp-test_multinode-990943-m03_multinode-990943-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.51s)

TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-990943 node stop m03: (1.538605116s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990943 status: exit status 7 (426.787307ms)
-- stdout --
	multinode-990943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-990943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-990943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr: exit status 7 (432.524633ms)
-- stdout --
	multinode-990943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-990943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-990943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0417 19:20:43.022373  111556 out.go:291] Setting OutFile to fd 1 ...
	I0417 19:20:43.022499  111556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:20:43.022508  111556 out.go:304] Setting ErrFile to fd 2...
	I0417 19:20:43.022512  111556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 19:20:43.022701  111556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75973/.minikube/bin
	I0417 19:20:43.022865  111556 out.go:298] Setting JSON to false
	I0417 19:20:43.022893  111556 mustload.go:65] Loading cluster: multinode-990943
	I0417 19:20:43.023000  111556 notify.go:220] Checking for updates...
	I0417 19:20:43.023284  111556 config.go:182] Loaded profile config "multinode-990943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0417 19:20:43.023306  111556 status.go:255] checking status of multinode-990943 ...
	I0417 19:20:43.023822  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.023891  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.039567  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0417 19:20:43.040021  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.040550  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.040573  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.041011  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.041220  111556 main.go:141] libmachine: (multinode-990943) Calling .GetState
	I0417 19:20:43.042842  111556 status.go:330] multinode-990943 host status = "Running" (err=<nil>)
	I0417 19:20:43.042861  111556 host.go:66] Checking if "multinode-990943" exists ...
	I0417 19:20:43.043170  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.043215  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.058765  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I0417 19:20:43.059220  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.059728  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.059769  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.060157  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.060367  111556 main.go:141] libmachine: (multinode-990943) Calling .GetIP
	I0417 19:20:43.063379  111556 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:20:43.063987  111556 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:20:43.064018  111556 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:20:43.064130  111556 host.go:66] Checking if "multinode-990943" exists ...
	I0417 19:20:43.064499  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.064565  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.080832  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35365
	I0417 19:20:43.081222  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.081652  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.081673  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.081990  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.082228  111556 main.go:141] libmachine: (multinode-990943) Calling .DriverName
	I0417 19:20:43.082431  111556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:20:43.082456  111556 main.go:141] libmachine: (multinode-990943) Calling .GetSSHHostname
	I0417 19:20:43.085199  111556 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:20:43.085615  111556 main.go:141] libmachine: (multinode-990943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:a9:e3", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:18:22 +0000 UTC Type:0 Mac:52:54:00:58:a9:e3 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-990943 Clientid:01:52:54:00:58:a9:e3}
	I0417 19:20:43.085648  111556 main.go:141] libmachine: (multinode-990943) DBG | domain multinode-990943 has defined IP address 192.168.39.106 and MAC address 52:54:00:58:a9:e3 in network mk-multinode-990943
	I0417 19:20:43.085760  111556 main.go:141] libmachine: (multinode-990943) Calling .GetSSHPort
	I0417 19:20:43.085891  111556 main.go:141] libmachine: (multinode-990943) Calling .GetSSHKeyPath
	I0417 19:20:43.086020  111556 main.go:141] libmachine: (multinode-990943) Calling .GetSSHUsername
	I0417 19:20:43.086166  111556 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943/id_rsa Username:docker}
	I0417 19:20:43.164991  111556 ssh_runner.go:195] Run: systemctl --version
	I0417 19:20:43.171540  111556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:20:43.185372  111556 kubeconfig.go:125] found "multinode-990943" server: "https://192.168.39.106:8443"
	I0417 19:20:43.185408  111556 api_server.go:166] Checking apiserver status ...
	I0417 19:20:43.185443  111556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 19:20:43.198923  111556 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	W0417 19:20:43.208671  111556 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 19:20:43.208722  111556 ssh_runner.go:195] Run: ls
	I0417 19:20:43.213280  111556 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0417 19:20:43.218913  111556 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0417 19:20:43.218949  111556 status.go:422] multinode-990943 apiserver status = Running (err=<nil>)
	I0417 19:20:43.218964  111556 status.go:257] multinode-990943 status: &{Name:multinode-990943 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:20:43.218989  111556 status.go:255] checking status of multinode-990943-m02 ...
	I0417 19:20:43.219336  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.219373  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.234712  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
	I0417 19:20:43.235087  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.235635  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.235650  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.236023  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.236213  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetState
	I0417 19:20:43.237809  111556 status.go:330] multinode-990943-m02 host status = "Running" (err=<nil>)
	I0417 19:20:43.237826  111556 host.go:66] Checking if "multinode-990943-m02" exists ...
	I0417 19:20:43.238188  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.238260  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.253150  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I0417 19:20:43.253543  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.254076  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.254100  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.254357  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.254538  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetIP
	I0417 19:20:43.257371  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | domain multinode-990943-m02 has defined MAC address 52:54:00:e7:d6:18 in network mk-multinode-990943
	I0417 19:20:43.257775  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d6:18", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:19:24 +0000 UTC Type:0 Mac:52:54:00:e7:d6:18 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-990943-m02 Clientid:01:52:54:00:e7:d6:18}
	I0417 19:20:43.257807  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | domain multinode-990943-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:e7:d6:18 in network mk-multinode-990943
	I0417 19:20:43.257959  111556 host.go:66] Checking if "multinode-990943-m02" exists ...
	I0417 19:20:43.258242  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.258275  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.273001  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0417 19:20:43.273421  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.273898  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.273922  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.274228  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.274436  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .DriverName
	I0417 19:20:43.274599  111556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 19:20:43.274629  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetSSHHostname
	I0417 19:20:43.277298  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | domain multinode-990943-m02 has defined MAC address 52:54:00:e7:d6:18 in network mk-multinode-990943
	I0417 19:20:43.277720  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d6:18", ip: ""} in network mk-multinode-990943: {Iface:virbr1 ExpiryTime:2024-04-17 20:19:24 +0000 UTC Type:0 Mac:52:54:00:e7:d6:18 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-990943-m02 Clientid:01:52:54:00:e7:d6:18}
	I0417 19:20:43.277742  111556 main.go:141] libmachine: (multinode-990943-m02) DBG | domain multinode-990943-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:e7:d6:18 in network mk-multinode-990943
	I0417 19:20:43.277920  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetSSHPort
	I0417 19:20:43.278085  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetSSHKeyPath
	I0417 19:20:43.278236  111556 main.go:141] libmachine: (multinode-990943-m02) Calling .GetSSHUsername
	I0417 19:20:43.278364  111556 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75973/.minikube/machines/multinode-990943-m02/id_rsa Username:docker}
	I0417 19:20:43.365017  111556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 19:20:43.380524  111556 status.go:257] multinode-990943-m02 status: &{Name:multinode-990943-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0417 19:20:43.380556  111556 status.go:255] checking status of multinode-990943-m03 ...
	I0417 19:20:43.380889  111556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0417 19:20:43.380922  111556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 19:20:43.396019  111556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0417 19:20:43.396438  111556 main.go:141] libmachine: () Calling .GetVersion
	I0417 19:20:43.396933  111556 main.go:141] libmachine: Using API Version  1
	I0417 19:20:43.396961  111556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 19:20:43.397291  111556 main.go:141] libmachine: () Calling .GetMachineName
	I0417 19:20:43.397476  111556 main.go:141] libmachine: (multinode-990943-m03) Calling .GetState
	I0417 19:20:43.399011  111556 status.go:330] multinode-990943-m03 host status = "Stopped" (err=<nil>)
	I0417 19:20:43.399027  111556 status.go:343] host is not running, skipping remaining checks
	I0417 19:20:43.399033  111556 status.go:257] multinode-990943-m03 status: &{Name:multinode-990943-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
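The status check in the stderr log above probes the control-plane apiserver directly at https://192.168.39.106:8443/healthz and treats an HTTP 200 "ok" response as healthy. The following is a rough standalone sketch of that probe in Go, not minikube's actual code; TLS verification is skipped here only because the sketch does not load the cluster CA that minikube itself uses.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumption: the apiserver address is taken from the log above; adjust for your cluster.
		const healthzURL = "https://192.168.39.106:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip certificate verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get(healthzURL)
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// Expect "200: ok" when the apiserver is healthy, matching the log above.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}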

TestMultiNode/serial/StartAfterStop (29.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-990943 node start m03 -v=7 --alsologtostderr: (29.123461629s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.78s)

TestMultiNode/serial/DeleteNode (2.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-990943 node delete m03: (1.766719176s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
E0417 19:26:22.365929   83207 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75973/.minikube/profiles/functional-169848/client.crt: no such file or directory
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.29s)

TestMultiNode/serial/RestartMultiNode (171.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990943 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990943 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m50.825983144s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990943 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (171.39s)

TestMultiNode/serial/ValidateNameConflict (47.32s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990943
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990943-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-990943-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (76.459299ms)
-- stdout --
	* [multinode-990943-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-990943-m02' is duplicated with machine name 'multinode-990943-m02' in profile 'multinode-990943'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990943-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990943-m03 --driver=kvm2  --container-runtime=crio: (45.926620392s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-990943
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-990943: exit status 80 (230.483687ms)
-- stdout --
	* Adding node m03 to cluster multinode-990943 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-990943-m03 already exists in multinode-990943-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-990943-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-990943-m03: (1.027457618s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.32s)

TestScheduledStopUnix (116.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-027306 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-027306 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.308089373s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027306 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-027306 -n scheduled-stop-027306
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027306 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027306 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027306 -n scheduled-stop-027306
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-027306
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-027306 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-027306
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-027306: exit status 7 (80.581613ms)
-- stdout --
	scheduled-stop-027306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027306 -n scheduled-stop-027306
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-027306 -n scheduled-stop-027306: exit status 7 (74.109818ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-027306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-027306
--- PASS: TestScheduledStopUnix (116.04s)

TestRunningBinaryUpgrade (195.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2972651117 start -p running-upgrade-419258 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2972651117 start -p running-upgrade-419258 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.489302457s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-419258 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-419258 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.160701244s)
helpers_test.go:175: Cleaning up "running-upgrade-419258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-419258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-419258: (1.150694796s)
--- PASS: TestRunningBinaryUpgrade (195.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (99.124389ms)
-- stdout --
	* [NoKubernetes-716489] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (89.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-716489 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-716489 --driver=kvm2  --container-runtime=crio: (1m29.559090871s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-716489 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.83s)

TestNoKubernetes/serial/StartWithStopK8s (41.09s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.639425877s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-716489 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-716489 status -o json: exit status 2 (297.394665ms)
-- stdout --
	{"Name":"NoKubernetes-716489","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-716489
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-716489: (1.157578145s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.09s)

TestNoKubernetes/serial/Start (49.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-716489 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.897474757s)
--- PASS: TestNoKubernetes/serial/Start (49.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-716489 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-716489 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.493109ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (0.83s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.83s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-716489
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-716489: (1.343768249s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (67.16s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-716489 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-716489 --driver=kvm2  --container-runtime=crio: (1m7.163509736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (67.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-716489 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-716489 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.036241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (96.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3179574010 start -p stopped-upgrade-684163 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3179574010 start -p stopped-upgrade-684163 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.739148623s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3179574010 -p stopped-upgrade-684163 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3179574010 -p stopped-upgrade-684163 stop: (2.131369811s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-684163 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-684163 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.843506214s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.71s)
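The upgrade scenario above is three CLI steps: create the cluster with a previously released minikube (v1.26.0, downloaded to a temporary path), stop it with that same binary, then start the stopped cluster with the binary under test to exercise the legacy-to-current upgrade path. A condensed sketch of that sequence follows, assuming the paths and flags shown in the log; it is not the version_upgrade_test.go implementation:

// Sketch: replay the stopped-binary upgrade flow from the log above.
package main

import (
	"log"
	"os/exec"
)

// run executes one minikube invocation and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", bin, args, err)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.3179574010" // released binary used for the initial start
	newBin := "out/minikube-linux-amd64"         // binary under test
	profile := "stopped-upgrade-684163"

	// 1. Bring the cluster up with the old release.
	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	// 2. Stop it with the same release.
	run(oldBin, "-p", profile, "stop")
	// 3. Start the stopped cluster with the new binary.
	run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}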

                                                
                                    
TestPause/serial/Start (111.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-646953 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-646953 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.672381296s)
--- PASS: TestPause/serial/Start (111.67s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-684163
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    

Test skip (32/207)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    